Test Report: Docker_macOS 15909

e35e2c770ef92dfe730882c95f60d10525bed15b:2023-02-22:28027

Failed tests (16/306)

TestIngressAddonLegacy/StartLegacyK8sCluster (259.15s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0222 20:32:26.929903    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:34:43.080232    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:35:03.149851    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.155453    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.165588    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.186138    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.226945    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.307438    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.467629    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.787740    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:04.428019    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:05.708120    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:08.268341    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:10.768346    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:35:13.388620    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:23.629576    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:44.109837    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m19.117969949s)

-- stdout --
	* [ingress-addon-legacy-292000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-292000 in cluster ingress-addon-legacy-292000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0222 20:32:04.364557    6079 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:32:04.364728    6079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:32:04.364733    6079 out.go:309] Setting ErrFile to fd 2...
	I0222 20:32:04.364737    6079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:32:04.364849    6079 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:32:04.366288    6079 out.go:303] Setting JSON to false
	I0222 20:32:04.384995    6079 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1899,"bootTime":1677124825,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 20:32:04.385061    6079 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 20:32:04.406908    6079 out.go:177] * [ingress-addon-legacy-292000] minikube v1.29.0 on Darwin 13.2
	I0222 20:32:04.428916    6079 notify.go:220] Checking for updates...
	I0222 20:32:04.450642    6079 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 20:32:04.472923    6079 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:32:04.494770    6079 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 20:32:04.515600    6079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 20:32:04.536935    6079 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 20:32:04.558711    6079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 20:32:04.579984    6079 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 20:32:04.641015    6079 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 20:32:04.641173    6079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:32:04.784649    6079 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:32:04.690766088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:32:04.806603    6079 out.go:177] * Using the docker driver based on user configuration
	I0222 20:32:04.850210    6079 start.go:296] selected driver: docker
	I0222 20:32:04.850288    6079 start.go:857] validating driver "docker" against <nil>
	I0222 20:32:04.850310    6079 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 20:32:04.854194    6079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:32:04.995790    6079 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:32:04.904263495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:32:04.995917    6079 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0222 20:32:04.996093    6079 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0222 20:32:05.020066    6079 out.go:177] * Using Docker Desktop driver with root privileges
	I0222 20:32:05.041866    6079 cni.go:84] Creating CNI manager for ""
	I0222 20:32:05.041905    6079 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 20:32:05.041923    6079 start_flags.go:319] config:
	{Name:ingress-addon-legacy-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:32:05.084783    6079 out.go:177] * Starting control plane node ingress-addon-legacy-292000 in cluster ingress-addon-legacy-292000
	I0222 20:32:05.106943    6079 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 20:32:05.128666    6079 out.go:177] * Pulling base image ...
	I0222 20:32:05.170906    6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0222 20:32:05.170965    6079 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 20:32:05.226462    6079 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 20:32:05.226488    6079 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 20:32:05.283975    6079 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0222 20:32:05.284016    6079 cache.go:57] Caching tarball of preloaded images
	I0222 20:32:05.284419    6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0222 20:32:05.306327    6079 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0222 20:32:05.349070    6079 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:32:05.582930    6079 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0222 20:32:13.443498    6079 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:32:13.443665    6079 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:32:14.066075    6079 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0222 20:32:14.066340    6079 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/config.json ...
	I0222 20:32:14.066366    6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/config.json: {Name:mkf72cad213af89d13db2bc5119e02acf8dda0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:14.066676    6079 cache.go:193] Successfully downloaded all kic artifacts
	I0222 20:32:14.066701    6079 start.go:364] acquiring machines lock for ingress-addon-legacy-292000: {Name:mk4d7b66f3190c7c8ddc1c191fefbad8ee44f2ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 20:32:14.066831    6079 start.go:368] acquired machines lock for "ingress-addon-legacy-292000" in 122.725µs
	I0222 20:32:14.066856    6079 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 20:32:14.066899    6079 start.go:125] createHost starting for "" (driver="docker")
	I0222 20:32:14.101190    6079 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0222 20:32:14.101563    6079 start.go:159] libmachine.API.Create for "ingress-addon-legacy-292000" (driver="docker")
	I0222 20:32:14.101606    6079 client.go:168] LocalClient.Create starting
	I0222 20:32:14.101824    6079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
	I0222 20:32:14.101915    6079 main.go:141] libmachine: Decoding PEM data...
	I0222 20:32:14.101952    6079 main.go:141] libmachine: Parsing certificate...
	I0222 20:32:14.102064    6079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
	I0222 20:32:14.102130    6079 main.go:141] libmachine: Decoding PEM data...
	I0222 20:32:14.102147    6079 main.go:141] libmachine: Parsing certificate...
	I0222 20:32:14.123885    6079 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0222 20:32:14.180732    6079 cli_runner.go:211] docker network inspect ingress-addon-legacy-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0222 20:32:14.180831    6079 network_create.go:281] running [docker network inspect ingress-addon-legacy-292000] to gather additional debugging logs...
	I0222 20:32:14.180850    6079 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-292000
	W0222 20:32:14.234568    6079 cli_runner.go:211] docker network inspect ingress-addon-legacy-292000 returned with exit code 1
	I0222 20:32:14.234595    6079 network_create.go:284] error running [docker network inspect ingress-addon-legacy-292000]: docker network inspect ingress-addon-legacy-292000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-292000
	I0222 20:32:14.234606    6079 network_create.go:286] output of [docker network inspect ingress-addon-legacy-292000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-292000
	
	** /stderr **
	I0222 20:32:14.234695    6079 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0222 20:32:14.289750    6079 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001275480}
	I0222 20:32:14.289793    6079 network_create.go:123] attempt to create docker network ingress-addon-legacy-292000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0222 20:32:14.289869    6079 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 ingress-addon-legacy-292000
	I0222 20:32:14.418843    6079 network_create.go:107] docker network ingress-addon-legacy-292000 192.168.49.0/24 created
	I0222 20:32:14.418876    6079 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-292000" container
	I0222 20:32:14.418991    6079 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0222 20:32:14.473957    6079 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-292000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --label created_by.minikube.sigs.k8s.io=true
	I0222 20:32:14.530025    6079 oci.go:103] Successfully created a docker volume ingress-addon-legacy-292000
	I0222 20:32:14.530188    6079 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-292000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --entrypoint /usr/bin/test -v ingress-addon-legacy-292000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0222 20:32:14.969321    6079 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-292000
	I0222 20:32:14.969358    6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0222 20:32:14.969372    6079 kic.go:190] Starting extracting preloaded images to volume ...
	I0222 20:32:14.969502    6079 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0222 20:32:21.077634    6079 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.108149443s)
	I0222 20:32:21.077652    6079 kic.go:199] duration metric: took 6.108350 seconds to extract preloaded images to volume
	I0222 20:32:21.077770    6079 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0222 20:32:21.222237    6079 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-292000 --name ingress-addon-legacy-292000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --network ingress-addon-legacy-292000 --ip 192.168.49.2 --volume ingress-addon-legacy-292000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0222 20:32:21.581086    6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Running}}
	I0222 20:32:21.644392    6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
	I0222 20:32:21.709609    6079 cli_runner.go:164] Run: docker exec ingress-addon-legacy-292000 stat /var/lib/dpkg/alternatives/iptables
	I0222 20:32:21.823524    6079 oci.go:144] the created container "ingress-addon-legacy-292000" has a running status.
	I0222 20:32:21.823558    6079 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa...
	I0222 20:32:21.951543    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0222 20:32:21.951616    6079 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0222 20:32:22.056541    6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
	I0222 20:32:22.113152    6079 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0222 20:32:22.113171    6079 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-292000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0222 20:32:22.217736    6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
	I0222 20:32:22.276734    6079 machine.go:88] provisioning docker machine ...
	I0222 20:32:22.276779    6079 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-292000"
	I0222 20:32:22.276887    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:22.334835    6079 main.go:141] libmachine: Using SSH client type: native
	I0222 20:32:22.335258    6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50506 <nil> <nil>}
	I0222 20:32:22.335275    6079 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-292000 && echo "ingress-addon-legacy-292000" | sudo tee /etc/hostname
	I0222 20:32:22.480058    6079 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-292000
	
	I0222 20:32:22.480146    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:22.538189    6079 main.go:141] libmachine: Using SSH client type: native
	I0222 20:32:22.538538    6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50506 <nil> <nil>}
	I0222 20:32:22.538554    6079 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-292000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-292000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-292000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 20:32:22.673088    6079 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 20:32:22.673110    6079 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 20:32:22.673127    6079 ubuntu.go:177] setting up certificates
	I0222 20:32:22.673134    6079 provision.go:83] configureAuth start
	I0222 20:32:22.673208    6079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-292000
	I0222 20:32:22.730337    6079 provision.go:138] copyHostCerts
	I0222 20:32:22.730388    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:32:22.730454    6079 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 20:32:22.730461    6079 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:32:22.730590    6079 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 20:32:22.730782    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:32:22.730825    6079 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 20:32:22.730830    6079 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:32:22.730894    6079 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 20:32:22.731013    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:32:22.731049    6079 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 20:32:22.731054    6079 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:32:22.731120    6079 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 20:32:22.731249    6079 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-292000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-292000]
	I0222 20:32:22.786277    6079 provision.go:172] copyRemoteCerts
	I0222 20:32:22.786349    6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 20:32:22.786405    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:22.843979    6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
	I0222 20:32:22.939318    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0222 20:32:22.939405    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 20:32:22.956590    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0222 20:32:22.956677    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0222 20:32:22.973665    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0222 20:32:22.973743    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 20:32:22.990935    6079 provision.go:86] duration metric: configureAuth took 317.787881ms
	I0222 20:32:22.990953    6079 ubuntu.go:193] setting minikube options for container-runtime
	I0222 20:32:22.991174    6079 config.go:182] Loaded profile config "ingress-addon-legacy-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0222 20:32:22.991260    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:23.048578    6079 main.go:141] libmachine: Using SSH client type: native
	I0222 20:32:23.048947    6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50506 <nil> <nil>}
	I0222 20:32:23.048961    6079 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 20:32:23.183936    6079 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 20:32:23.183964    6079 ubuntu.go:71] root file system type: overlay
	I0222 20:32:23.184061    6079 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 20:32:23.184181    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:23.243298    6079 main.go:141] libmachine: Using SSH client type: native
	I0222 20:32:23.243658    6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50506 <nil> <nil>}
	I0222 20:32:23.243721    6079 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 20:32:23.387699    6079 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 20:32:23.387801    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:23.446987    6079 main.go:141] libmachine: Using SSH client type: native
	I0222 20:32:23.447364    6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50506 <nil> <nil>}
	I0222 20:32:23.447377    6079 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 20:32:24.054813    6079 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:32:23.385827822 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0222 20:32:24.054846    6079 machine.go:91] provisioned docker machine in 1.778110761s
	I0222 20:32:24.054851    6079 client.go:171] LocalClient.Create took 9.953353786s
	I0222 20:32:24.054924    6079 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-292000" took 9.953474955s
	I0222 20:32:24.054996    6079 start.go:300] post-start starting for "ingress-addon-legacy-292000" (driver="docker")
	I0222 20:32:24.055008    6079 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 20:32:24.055138    6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 20:32:24.055218    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:24.116228    6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
	I0222 20:32:24.212198    6079 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 20:32:24.215853    6079 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 20:32:24.215870    6079 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 20:32:24.215885    6079 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 20:32:24.215891    6079 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 20:32:24.215901    6079 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 20:32:24.216001    6079 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 20:32:24.216176    6079 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 20:32:24.216182    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /etc/ssl/certs/31332.pem
	I0222 20:32:24.216378    6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 20:32:24.223632    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:32:24.240926    6079 start.go:303] post-start completed in 185.916538ms
	I0222 20:32:24.241452    6079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-292000
	I0222 20:32:24.299964    6079 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/config.json ...
	I0222 20:32:24.300383    6079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 20:32:24.300444    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:24.358430    6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
	I0222 20:32:24.452899    6079 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 20:32:24.457576    6079 start.go:128] duration metric: createHost completed in 10.390789341s
	I0222 20:32:24.457591    6079 start.go:83] releasing machines lock for "ingress-addon-legacy-292000", held for 10.390872038s
	I0222 20:32:24.457688    6079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-292000
	I0222 20:32:24.515482    6079 ssh_runner.go:195] Run: cat /version.json
	I0222 20:32:24.515525    6079 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0222 20:32:24.515555    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:24.515591    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:24.579954    6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
	I0222 20:32:24.580122    6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
	I0222 20:32:24.930410    6079 ssh_runner.go:195] Run: systemctl --version
	I0222 20:32:24.934960    6079 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 20:32:24.939793    6079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 20:32:24.959226    6079 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 20:32:24.959305    6079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0222 20:32:24.973401    6079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0222 20:32:24.981459    6079 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0222 20:32:24.981477    6079 start.go:485] detecting cgroup driver to use...
	I0222 20:32:24.981487    6079 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:32:24.981563    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:32:24.994869    6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0222 20:32:25.003582    6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 20:32:25.012181    6079 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 20:32:25.012252    6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 20:32:25.020945    6079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:32:25.029481    6079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 20:32:25.038042    6079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:32:25.046848    6079 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 20:32:25.054689    6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 20:32:25.063078    6079 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 20:32:25.070718    6079 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 20:32:25.077781    6079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:32:25.145945    6079 ssh_runner.go:195] Run: sudo systemctl restart containerd
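The sed edits above switch containerd to the cgroupfs driver, the runc.v2 shim, the k8s.gcr.io/pause:3.2 sandbox image and the /etc/cni/net.d conf_dir before this restart. A hypothetical spot-check from the host (not part of the test run, assuming the ingress-addon-legacy-292000 container is reachable via docker):
	docker exec -t ingress-addon-legacy-292000 sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# expected after the edits: SystemdCgroup = false, sandbox_image = "k8s.gcr.io/pause:3.2", conf_dir = "/etc/cni/net.d"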
	I0222 20:32:25.218288    6079 start.go:485] detecting cgroup driver to use...
	I0222 20:32:25.218309    6079 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:32:25.218374    6079 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 20:32:25.229819    6079 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 20:32:25.229909    6079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 20:32:25.241217    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:32:25.255509    6079 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 20:32:25.368775    6079 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 20:32:25.455324    6079 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 20:32:25.455361    6079 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 20:32:25.469783    6079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:32:25.557374    6079 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 20:32:25.778995    6079 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:32:25.806559    6079 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:32:25.856982    6079 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	I0222 20:32:25.857198    6079 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-292000 dig +short host.docker.internal
	I0222 20:32:25.972637    6079 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 20:32:25.972740    6079 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 20:32:25.977126    6079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:32:25.986997    6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:32:26.044340    6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0222 20:32:26.044417    6079 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 20:32:26.065277    6079 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0222 20:32:26.065294    6079 docker.go:560] Images already preloaded, skipping extraction
	I0222 20:32:26.065389    6079 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 20:32:26.085599    6079 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0222 20:32:26.085625    6079 cache_images.go:84] Images are preloaded, skipping loading
	I0222 20:32:26.085704    6079 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 20:32:26.112315    6079 cni.go:84] Creating CNI manager for ""
	I0222 20:32:26.112335    6079 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 20:32:26.112348    6079 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 20:32:26.112371    6079 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-292000 NodeName:ingress-addon-legacy-292000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 20:32:26.112491    6079 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-292000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 20:32:26.112585    6079 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-292000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
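The [Service] override above is written to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes) alongside the base unit at /lib/systemd/system/kubelet.service. A hypothetical way to view the merged unit systemd will actually start, assuming the node container is still up:
	docker exec -t ingress-addon-legacy-292000 sudo systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in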
	I0222 20:32:26.112653    6079 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0222 20:32:26.120691    6079 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 20:32:26.120753    6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 20:32:26.128255    6079 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0222 20:32:26.141000    6079 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0222 20:32:26.154119    6079 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
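The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before 'kubeadm init' runs (see the cp further down). A hypothetical way to read back the staged copy from the host:
	docker exec -t ingress-addon-legacy-292000 sudo cat /var/tmp/minikube/kubeadm.yaml.new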
	I0222 20:32:26.167278    6079 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0222 20:32:26.171871    6079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:32:26.181807    6079 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000 for IP: 192.168.49.2
	I0222 20:32:26.181825    6079 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:26.182023    6079 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 20:32:26.182094    6079 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 20:32:26.182143    6079 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.key
	I0222 20:32:26.182155    6079 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.crt with IP's: []
	I0222 20:32:26.304372    6079 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.crt ...
	I0222 20:32:26.304383    6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.crt: {Name:mk6ec94438c90edcd19fac817403ff3040b023c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:26.304689    6079 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.key ...
	I0222 20:32:26.304696    6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.key: {Name:mk49af5056c983e08e2bb81ab9fc7215d6b81b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:26.304894    6079 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2
	I0222 20:32:26.304910    6079 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0222 20:32:26.431795    6079 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2 ...
	I0222 20:32:26.431803    6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2: {Name:mk55bf2c39823aa7d85fd59d9723a5e0bafb6355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:26.432035    6079 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2 ...
	I0222 20:32:26.432043    6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2: {Name:mk9d6d862608a480c18cd0167b4acdd396312265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:26.432224    6079 certs.go:333] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt
	I0222 20:32:26.432386    6079 certs.go:337] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key
	I0222 20:32:26.432561    6079 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key
	I0222 20:32:26.432576    6079 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt with IP's: []
	I0222 20:32:26.507992    6079 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt ...
	I0222 20:32:26.508000    6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt: {Name:mkd141cc39f393f693905abe7d9dd8211695c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:26.508342    6079 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key ...
	I0222 20:32:26.508350    6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key: {Name:mk427519864f13b3c5d07c7ade41a7d5cc7d2659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:32:26.508538    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0222 20:32:26.508567    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0222 20:32:26.508593    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0222 20:32:26.508613    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0222 20:32:26.508633    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0222 20:32:26.508652    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0222 20:32:26.508671    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0222 20:32:26.508694    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0222 20:32:26.508794    6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 20:32:26.508841    6079 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 20:32:26.508851    6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 20:32:26.508891    6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 20:32:26.508925    6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 20:32:26.508955    6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 20:32:26.509021    6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:32:26.509054    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /usr/share/ca-certificates/31332.pem
	I0222 20:32:26.509075    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:32:26.509092    6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem -> /usr/share/ca-certificates/3133.pem
	I0222 20:32:26.509595    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 20:32:26.528835    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0222 20:32:26.546438    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 20:32:26.563996    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0222 20:32:26.580950    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 20:32:26.598172    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 20:32:26.615134    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 20:32:26.632846    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 20:32:26.651086    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 20:32:26.669295    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 20:32:26.686954    6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 20:32:26.704389    6079 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 20:32:26.717808    6079 ssh_runner.go:195] Run: openssl version
	I0222 20:32:26.723511    6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 20:32:26.731583    6079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 20:32:26.735858    6079 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:32:26.735912    6079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 20:32:26.741381    6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 20:32:26.749532    6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 20:32:26.757585    6079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:32:26.761836    6079 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:32:26.761885    6079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:32:26.767463    6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 20:32:26.775465    6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 20:32:26.783646    6079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 20:32:26.787912    6079 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:32:26.787957    6079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 20:32:26.793500    6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
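The hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash aliases for the copied certificates. A hypothetical check that the minikube CA link resolves as expected, using the same openssl invocation seen above:
	docker exec -t ingress-addon-legacy-292000 sudo openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941
	docker exec -t ingress-addon-legacy-292000 ls -l /etc/ssl/certs/b5213941.0                                                # should point at /etc/ssl/certs/minikubeCA.pem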
	I0222 20:32:26.801362    6079 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:32:26.801505    6079 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 20:32:26.821128    6079 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 20:32:26.829136    6079 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 20:32:26.836786    6079 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 20:32:26.836838    6079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 20:32:26.844399    6079 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 20:32:26.844426    6079 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 20:32:26.893688    6079 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0222 20:32:26.893751    6079 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 20:32:27.062272    6079 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 20:32:27.062365    6079 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 20:32:27.062505    6079 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 20:32:27.218238    6079 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:32:27.218896    6079 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:32:27.218961    6079 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0222 20:32:27.291119    6079 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 20:32:27.332668    6079 out.go:204]   - Generating certificates and keys ...
	I0222 20:32:27.332785    6079 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 20:32:27.332864    6079 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 20:32:27.622312    6079 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 20:32:27.732857    6079 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0222 20:32:27.851448    6079 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0222 20:32:27.938217    6079 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0222 20:32:28.043180    6079 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0222 20:32:28.043389    6079 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0222 20:32:28.223115    6079 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0222 20:32:28.223358    6079 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0222 20:32:28.368045    6079 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 20:32:28.466167    6079 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 20:32:28.547739    6079 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0222 20:32:28.547847    6079 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 20:32:28.699930    6079 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 20:32:29.005866    6079 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 20:32:29.216630    6079 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 20:32:29.435835    6079 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 20:32:29.436494    6079 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 20:32:29.458090    6079 out.go:204]   - Booting up control plane ...
	I0222 20:32:29.458230    6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 20:32:29.458359    6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 20:32:29.458452    6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 20:32:29.458561    6079 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 20:32:29.458735    6079 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 20:33:09.445963    6079 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 20:33:09.447038    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:33:09.447244    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:33:14.448612    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:33:14.448830    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:33:24.449747    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:33:24.449939    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:33:44.451161    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:33:44.451383    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:34:24.452063    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:34:24.452340    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:34:24.452363    6079 kubeadm.go:322] 
	I0222 20:34:24.452414    6079 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0222 20:34:24.452469    6079 kubeadm.go:322] 		timed out waiting for the condition
	I0222 20:34:24.452475    6079 kubeadm.go:322] 
	I0222 20:34:24.452531    6079 kubeadm.go:322] 	This error is likely caused by:
	I0222 20:34:24.452573    6079 kubeadm.go:322] 		- The kubelet is not running
	I0222 20:34:24.452715    6079 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 20:34:24.452724    6079 kubeadm.go:322] 
	I0222 20:34:24.452868    6079 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 20:34:24.452921    6079 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0222 20:34:24.452978    6079 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0222 20:34:24.452986    6079 kubeadm.go:322] 
	I0222 20:34:24.453133    6079 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 20:34:24.453223    6079 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0222 20:34:24.453237    6079 kubeadm.go:322] 
	I0222 20:34:24.453334    6079 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0222 20:34:24.453393    6079 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0222 20:34:24.453500    6079 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0222 20:34:24.453539    6079 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0222 20:34:24.453548    6079 kubeadm.go:322] 
	I0222 20:34:24.456324    6079 kubeadm.go:322] W0223 04:32:26.892660    1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0222 20:34:24.456478    6079 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 20:34:24.456542    6079 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 20:34:24.456674    6079 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0222 20:34:24.456763    6079 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 20:34:24.456868    6079 kubeadm.go:322] W0223 04:32:29.441635    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0222 20:34:24.456966    6079 kubeadm.go:322] W0223 04:32:29.442545    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0222 20:34:24.457032    6079 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 20:34:24.457096    6079 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0222 20:34:24.457315    6079 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 04:32:26.892660    1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 04:32:29.441635    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 04:32:29.442545    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
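Both the streamed kubeadm output and the captured stdout above stall at the same kubelet health check. Before the reset-and-retry below, the troubleshooting commands kubeadm suggests could be run against the node container from the host; a minimal sketch (hypothetical, not captured in this log):
	docker exec -t ingress-addon-legacy-292000 sudo systemctl status kubelet --no-pager
	docker exec -t ingress-addon-legacy-292000 sudo journalctl -xeu kubelet --no-pager | tail -n 50
	docker exec -t ingress-addon-legacy-292000 sudo sh -c "docker ps -a | grep kube | grep -v pause"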
	
	I0222 20:34:24.457358    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0222 20:34:24.866960    6079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:34:24.876575    6079 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 20:34:24.876647    6079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 20:34:24.883863    6079 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 20:34:24.883884    6079 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 20:34:24.930392    6079 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0222 20:34:24.930457    6079 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 20:34:25.091424    6079 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 20:34:25.091521    6079 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 20:34:25.091612    6079 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 20:34:25.240347    6079 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:34:25.240911    6079 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:34:25.241119    6079 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0222 20:34:25.317712    6079 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 20:34:25.339324    6079 out.go:204]   - Generating certificates and keys ...
	I0222 20:34:25.339412    6079 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 20:34:25.339485    6079 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 20:34:25.339559    6079 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 20:34:25.339632    6079 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 20:34:25.339696    6079 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 20:34:25.339744    6079 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 20:34:25.339812    6079 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 20:34:25.339868    6079 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 20:34:25.339936    6079 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 20:34:25.339998    6079 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 20:34:25.340050    6079 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 20:34:25.340114    6079 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 20:34:25.610187    6079 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 20:34:25.683219    6079 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 20:34:25.803435    6079 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 20:34:25.896546    6079 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 20:34:25.896903    6079 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 20:34:25.918633    6079 out.go:204]   - Booting up control plane ...
	I0222 20:34:25.918750    6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 20:34:25.918848    6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 20:34:25.918942    6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 20:34:25.919051    6079 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 20:34:25.919249    6079 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 20:35:05.905001    6079 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 20:35:05.905862    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:35:05.906032    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:35:10.907045    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:35:10.907276    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:35:20.908013    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:35:20.908155    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:35:40.908898    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:35:40.909068    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:36:20.909016    6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 20:36:20.909221    6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 20:36:20.909233    6079 kubeadm.go:322] 
	I0222 20:36:20.909264    6079 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0222 20:36:20.909294    6079 kubeadm.go:322] 		timed out waiting for the condition
	I0222 20:36:20.909300    6079 kubeadm.go:322] 
	I0222 20:36:20.909324    6079 kubeadm.go:322] 	This error is likely caused by:
	I0222 20:36:20.909349    6079 kubeadm.go:322] 		- The kubelet is not running
	I0222 20:36:20.909445    6079 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 20:36:20.909458    6079 kubeadm.go:322] 
	I0222 20:36:20.909569    6079 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 20:36:20.909604    6079 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0222 20:36:20.909636    6079 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0222 20:36:20.909647    6079 kubeadm.go:322] 
	I0222 20:36:20.909726    6079 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 20:36:20.909794    6079 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0222 20:36:20.909804    6079 kubeadm.go:322] 
	I0222 20:36:20.909879    6079 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0222 20:36:20.909922    6079 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0222 20:36:20.909991    6079 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0222 20:36:20.910022    6079 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0222 20:36:20.910027    6079 kubeadm.go:322] 
	I0222 20:36:20.912472    6079 kubeadm.go:322] W0223 04:34:24.929893    3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0222 20:36:20.912630    6079 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 20:36:20.912718    6079 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 20:36:20.912832    6079 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0222 20:36:20.912915    6079 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 20:36:20.913015    6079 kubeadm.go:322] W0223 04:34:25.900490    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0222 20:36:20.913115    6079 kubeadm.go:322] W0223 04:34:25.901886    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0222 20:36:20.913182    6079 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 20:36:20.913246    6079 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0222 20:36:20.913285    6079 kubeadm.go:403] StartCluster complete in 3m54.114604572s
	I0222 20:36:20.913392    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 20:36:20.932409    6079 logs.go:278] 0 containers: []
	W0222 20:36:20.932423    6079 logs.go:280] No container was found matching "kube-apiserver"
	I0222 20:36:20.932497    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 20:36:20.952144    6079 logs.go:278] 0 containers: []
	W0222 20:36:20.952157    6079 logs.go:280] No container was found matching "etcd"
	I0222 20:36:20.952233    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 20:36:20.971685    6079 logs.go:278] 0 containers: []
	W0222 20:36:20.971697    6079 logs.go:280] No container was found matching "coredns"
	I0222 20:36:20.971769    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 20:36:20.990595    6079 logs.go:278] 0 containers: []
	W0222 20:36:20.990609    6079 logs.go:280] No container was found matching "kube-scheduler"
	I0222 20:36:20.990686    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 20:36:21.008982    6079 logs.go:278] 0 containers: []
	W0222 20:36:21.009003    6079 logs.go:280] No container was found matching "kube-proxy"
	I0222 20:36:21.009072    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 20:36:21.028480    6079 logs.go:278] 0 containers: []
	W0222 20:36:21.028496    6079 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 20:36:21.028566    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 20:36:21.047367    6079 logs.go:278] 0 containers: []
	W0222 20:36:21.047381    6079 logs.go:280] No container was found matching "kindnet"
	I0222 20:36:21.047449    6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 20:36:21.066208    6079 logs.go:278] 0 containers: []
	W0222 20:36:21.066221    6079 logs.go:280] No container was found matching "storage-provisioner"
	I0222 20:36:21.066228    6079 logs.go:124] Gathering logs for Docker ...
	I0222 20:36:21.066235    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 20:36:21.092541    6079 logs.go:124] Gathering logs for container status ...
	I0222 20:36:21.092554    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 20:36:23.136862    6079 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044316675s)
	I0222 20:36:23.136986    6079 logs.go:124] Gathering logs for kubelet ...
	I0222 20:36:23.136993    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 20:36:23.175088    6079 logs.go:124] Gathering logs for dmesg ...
	I0222 20:36:23.175103    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 20:36:23.187702    6079 logs.go:124] Gathering logs for describe nodes ...
	I0222 20:36:23.187714    6079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 20:36:23.241376    6079 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0222 20:36:23.241404    6079 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 04:34:24.929893    3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 04:34:25.900490    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 04:34:25.901886    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0222 20:36:23.241421    6079 out.go:239] * 
	* 
	W0222 20:36:23.241552    6079 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 04:34:24.929893    3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 04:34:25.900490    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 04:34:25.901886    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 04:34:24.929893    3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 04:34:25.900490    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 04:34:25.901886    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 20:36:23.241566    6079 out.go:239] * 
	* 
	W0222 20:36:23.242221    6079 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0222 20:36:23.306028    6079 out.go:177] 
	W0222 20:36:23.349169    6079 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 04:34:24.929893    3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 04:34:25.900490    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 04:34:25.901886    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 04:34:24.929893    3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 04:34:25.900490    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 04:34:25.901886    3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 20:36:23.349275    6079 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0222 20:36:23.349379    6079 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0222 20:36:23.371013    6079 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (259.15s)
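The captured stderr above ends with minikube's own suggestion for this failure mode ("Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start"). A minimal sketch of what that retry could look like for this profile, reusing the flags from the original test invocation; whether the override actually clears the kubelet-check timeout seen here with Docker 23.0.1 is an assumption, not something verified in this report:

    out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd

The IsDockerSystemdCheck warning in the same log points at the alternative of switching Docker itself to the systemd cgroup driver (an "exec-opts": ["native.cgroupdriver=systemd"] entry in the daemon's daemon.json, per the guide linked from the warning), which would address the cgroupfs/systemd mismatch at the runtime level rather than at the kubelet.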

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (110.7s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-292000 addons enable ingress --alsologtostderr -v=5
E0222 20:36:25.071788    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:37:46.993138    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-292000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m50.24485868s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 20:36:23.516011    6452 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:36:23.516294    6452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:36:23.516299    6452 out.go:309] Setting ErrFile to fd 2...
	I0222 20:36:23.516303    6452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:36:23.516413    6452 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:36:23.538132    6452 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0222 20:36:23.559987    6452 config.go:182] Loaded profile config "ingress-addon-legacy-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0222 20:36:23.560003    6452 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-292000"
	I0222 20:36:23.560009    6452 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-292000"
	I0222 20:36:23.560338    6452 host.go:66] Checking if "ingress-addon-legacy-292000" exists ...
	I0222 20:36:23.560828    6452 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
	I0222 20:36:23.642095    6452 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0222 20:36:23.663877    6452 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0222 20:36:23.684666    6452 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0222 20:36:23.705893    6452 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0222 20:36:23.727403    6452 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0222 20:36:23.727443    6452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0222 20:36:23.727629    6452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:36:23.784430    6452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
	I0222 20:36:23.882800    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:23.934003    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:23.934045    6452 retry.go:31] will retry after 228.052647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:24.163788    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:24.217459    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:24.217478    6452 retry.go:31] will retry after 382.649133ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:24.600518    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:24.652899    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:24.652919    6452 retry.go:31] will retry after 546.185821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:25.199352    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:25.253816    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:25.253833    6452 retry.go:31] will retry after 684.840312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:25.940941    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:25.994675    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:25.994692    6452 retry.go:31] will retry after 1.005335554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:27.000163    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:27.051293    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:27.051312    6452 retry.go:31] will retry after 1.953835448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:29.007478    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:29.061694    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:29.061720    6452 retry.go:31] will retry after 2.379413521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:31.443382    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:31.497445    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:31.497461    6452 retry.go:31] will retry after 3.820353005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:35.319720    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:35.373525    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:35.373542    6452 retry.go:31] will retry after 5.998607797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:41.373166    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:41.426260    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:41.426277    6452 retry.go:31] will retry after 7.207874075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:48.634349    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:48.686476    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:48.686491    6452 retry.go:31] will retry after 9.993148148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:58.680319    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:36:58.733341    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:36:58.733356    6452 retry.go:31] will retry after 32.054745535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:37:30.788519    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:37:30.842054    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:37:30.842069    6452 retry.go:31] will retry after 42.69928054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:13.543024    6452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0222 20:38:13.596673    6452 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:13.596703    6452 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-292000"
	I0222 20:38:13.618438    6452 out.go:177] * Verifying ingress addon...
	I0222 20:38:13.641682    6452 out.go:177] 
	W0222 20:38:13.663292    6452 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-292000" does not exist: client config: context "ingress-addon-legacy-292000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-292000" does not exist: client config: context "ingress-addon-legacy-292000" does not exist]
	W0222 20:38:13.663326    6452 out.go:239] * 
	* 
	W0222 20:38:13.667051    6452 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0222 20:38:13.688253    6452 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
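Editor's note: the stderr log above shows minikube's addon apply loop re-running the same `kubectl apply` with growing delays (roughly 6s, 7s, 10s, 32s, 43s) before giving up, each attempt failing with "connection to the server localhost:8443 was refused". The sketch below is a hypothetical illustration of that retry-with-backoff pattern in Go; it is not minikube's retry.go, and the delay schedule is simply copied from the log for illustration.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry runs a command repeatedly, waiting longer after each
// failure, until it succeeds or the retry budget is exhausted. The delays
// roughly mirror the escalating waits seen in the log above.
func applyWithRetry(name string, args ...string) error {
	delays := []time.Duration{
		6 * time.Second, 7 * time.Second, 10 * time.Second,
		32 * time.Second, 43 * time.Second,
	}
	var lastErr error
	for attempt := 0; attempt <= len(delays); attempt++ {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		if attempt < len(delays) {
			fmt.Printf("will retry after %s: %v\n", delays[attempt], lastErr)
			time.Sleep(delays[attempt])
		}
	}
	return errors.Join(errors.New("retry budget exhausted"), lastErr)
}

func main() {
	// Hypothetical usage: the same apply command the log shows failing.
	if err := applyWithRetry("kubectl", "apply", "-f", "/etc/kubernetes/addons/ingress-deploy.yaml"); err != nil {
		fmt.Println(err)
	}
}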
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-292000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-292000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab",
	        "Created": "2023-02-23T04:32:21.277649375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 47394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T04:32:21.573046235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/hosts",
	        "LogPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab-json.log",
	        "Name": "/ingress-addon-legacy-292000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-292000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-292000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-292000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-292000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-292000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-292000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-292000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67ba822becad73eeeb98d26fa037fe8200bbac372862ff8efb4818cb25125b31",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50510"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67ba822becad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-292000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6940408d9d50",
	                        "ingress-addon-legacy-292000"
	                    ],
	                    "NetworkID": "a9c83bedd32b778202e7eee585629df7cea588c0b95aa7b0333585478de56eba",
	                    "EndpointID": "7588ae3d1d307a734ca96e4562493266b368955765a8cb5d2859f1cce0a5f2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-292000 -n ingress-addon-legacy-292000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-292000 -n ingress-addon-legacy-292000: exit status 6 (387.263784ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 20:38:14.153381    6556 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-292000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-292000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (110.70s)
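Editor's note: the status error above ("ingress-addon-legacy-292000" does not appear in the kubeconfig) means the profile's context was never written to /Users/jenkins/minikube-integration/15909-2664/kubeconfig, so the post-mortem helper cannot extract an endpoint IP. The following is a crude diagnostic sketch, assuming those paths; it is not the helpers_test.go code and does a plain string scan rather than a full YAML parse.

package main

import (
	"fmt"
	"os"
	"strings"
)

// contextPresent reports whether a named context appears anywhere in the
// kubeconfig file. This is a rough substring check for triage purposes.
func contextPresent(kubeconfigPath, contextName string) (bool, error) {
	data, err := os.ReadFile(kubeconfigPath)
	if err != nil {
		return false, err
	}
	return strings.Contains(string(data), contextName), nil
}

func main() {
	// Hypothetical paths matching the ones reported in the log above.
	ok, err := contextPresent(
		"/Users/jenkins/minikube-integration/15909-2664/kubeconfig",
		"ingress-addon-legacy-292000",
	)
	if err != nil {
		fmt.Println("cannot read kubeconfig:", err)
		return
	}
	if !ok {
		fmt.Println(`context "ingress-addon-legacy-292000" does not appear in kubeconfig; run "minikube update-context"`)
	}
}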

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (96.96s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-292000 addons enable ingress-dns --alsologtostderr -v=5
E0222 20:39:43.077809    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-292000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m36.504165948s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 20:38:14.207507    6566 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:38:14.207797    6566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:38:14.207802    6566 out.go:309] Setting ErrFile to fd 2...
	I0222 20:38:14.207806    6566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:38:14.207916    6566 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:38:14.230008    6566 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0222 20:38:14.251565    6566 config.go:182] Loaded profile config "ingress-addon-legacy-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0222 20:38:14.251591    6566 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-292000"
	I0222 20:38:14.251600    6566 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-292000"
	I0222 20:38:14.251951    6566 host.go:66] Checking if "ingress-addon-legacy-292000" exists ...
	I0222 20:38:14.254073    6566 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
	I0222 20:38:14.332022    6566 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0222 20:38:14.354002    6566 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0222 20:38:14.375983    6566 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0222 20:38:14.376023    6566 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0222 20:38:14.376174    6566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
	I0222 20:38:14.434357    6566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
	I0222 20:38:14.535345    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:14.585903    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:14.585943    6566 retry.go:31] will retry after 167.103175ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:14.755337    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:14.811160    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:14.811178    6566 retry.go:31] will retry after 211.741222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:15.023860    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:15.078030    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:15.078059    6566 retry.go:31] will retry after 289.207857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:15.367495    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:15.419914    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:15.419930    6566 retry.go:31] will retry after 699.105683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:16.119203    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:16.173730    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:16.173747    6566 retry.go:31] will retry after 1.211396397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:17.385687    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:17.438876    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:17.438894    6566 retry.go:31] will retry after 1.059057532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:18.499447    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:18.553203    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:18.553221    6566 retry.go:31] will retry after 1.433853671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:19.989364    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:20.043368    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:20.043384    6566 retry.go:31] will retry after 4.042015907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:24.087672    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:24.141119    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:24.141134    6566 retry.go:31] will retry after 3.834785169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:27.978122    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:28.031721    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:28.031738    6566 retry.go:31] will retry after 5.056430715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:33.089489    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:33.143466    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:33.143480    6566 retry.go:31] will retry after 16.40673538s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:49.550318    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:38:49.604635    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:38:49.604652    6566 retry.go:31] will retry after 14.53355274s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:39:04.138458    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:39:04.195722    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:39:04.195740    6566 retry.go:31] will retry after 46.323489269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:39:50.520547    6566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0222 20:39:50.575026    6566 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0222 20:39:50.596853    6566 out.go:177] 
	W0222 20:39:50.618176    6566 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0222 20:39:50.618210    6566 out.go:239] * 
	* 
	W0222 20:39:50.621866    6566 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0222 20:39:50.643080    6566 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
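Editor's note: every ingress-dns apply attempt fails the same way as the ingress addon before it: nothing is accepting connections on port 8443 inside the node, which points at the apiserver being down rather than the addon manifest being invalid. A quick connectivity probe such as the hypothetical sketch below (for triage only, e.g. run inside the node via `minikube ssh`) makes that distinction explicit.

package main

import (
	"fmt"
	"net"
	"time"
)

// probe reports whether anything accepts TCP connections on the address.
// A "connection refused" result matches the kubectl errors in the log above
// and implicates the apiserver rather than the addon manifests.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probe("localhost:8443"); err != nil {
		fmt.Println("apiserver not reachable:", err)
	} else {
		fmt.Println("apiserver port is open")
	}
}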
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-292000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-292000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab",
	        "Created": "2023-02-23T04:32:21.277649375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 47394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T04:32:21.573046235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/hosts",
	        "LogPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab-json.log",
	        "Name": "/ingress-addon-legacy-292000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-292000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-292000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-292000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-292000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-292000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-292000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-292000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67ba822becad73eeeb98d26fa037fe8200bbac372862ff8efb4818cb25125b31",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50510"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67ba822becad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-292000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6940408d9d50",
	                        "ingress-addon-legacy-292000"
	                    ],
	                    "NetworkID": "a9c83bedd32b778202e7eee585629df7cea588c0b95aa7b0333585478de56eba",
	                    "EndpointID": "7588ae3d1d307a734ca96e4562493266b368955765a8cb5d2859f1cce0a5f2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-292000 -n ingress-addon-legacy-292000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-292000 -n ingress-addon-legacy-292000: exit status 6 (390.703026ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0222 20:39:51.107940    6665 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-292000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-292000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (96.96s)
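Note: the exit status 6 above is not about the container, which docker inspect reports as Running; it is that the kubeconfig at /Users/jenkins/minikube-integration/15909-2664/kubeconfig no longer contains an entry for the ingress-addon-legacy-292000 profile. A minimal recovery sketch, assuming the profile name taken from the log and that minikube names the kubeconfig context after the profile:

	# Regenerate the kubeconfig entry for the profile, as the warning in stdout suggests
	out/minikube-darwin-amd64 -p ingress-addon-legacy-292000 update-context

	# Confirm the context exists again and can reach the cluster
	kubectl config get-contexts ingress-addon-legacy-292000
	kubectl --context ingress-addon-legacy-292000 get nodes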

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-292000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-292000:

-- stdout --
	[
	    {
	        "Id": "6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab",
	        "Created": "2023-02-23T04:32:21.277649375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 47394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T04:32:21.573046235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/hosts",
	        "LogPath": "/var/lib/docker/containers/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab/6940408d9d500e27a6679a00bf844b9d542ec06e95e4106b486e4cbfbcc7d3ab-json.log",
	        "Name": "/ingress-addon-legacy-292000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-292000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-292000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1653ed5b89d536e071258e3b650b4da56ba5e73b1b32b689285308ac8be3a5c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-292000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-292000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-292000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-292000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-292000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67ba822becad73eeeb98d26fa037fe8200bbac372862ff8efb4818cb25125b31",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50509"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50510"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67ba822becad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-292000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6940408d9d50",
	                        "ingress-addon-legacy-292000"
	                    ],
	                    "NetworkID": "a9c83bedd32b778202e7eee585629df7cea588c0b95aa7b0333585478de56eba",
	                    "EndpointID": "7588ae3d1d307a734ca96e4562493266b368955765a8cb5d2859f1cce0a5f2a9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-292000 -n ingress-addon-legacy-292000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-292000 -n ingress-addon-legacy-292000: exit status 6 (393.073659ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0222 20:39:51.559568    6679 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-292000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-292000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
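Note: this subtest fails in 0.45s at addons_test.go:171 ("failed to get Kubernetes client: <nil>") for the same reason as above — no client can be built from a kubeconfig that lacks the profile's context. As a rough cross-check that does not go through kubectl at all (assuming only the profile name from the log), the addon state can still be queried from minikube directly:

	# Show which addons minikube believes are enabled for this profile
	out/minikube-darwin-amd64 -p ingress-addon-legacy-292000 addons list

	# After the context is restored with update-context, the addon's pods can be listed as usual
	kubectl --context ingress-addon-legacy-292000 get pods -A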

TestMultiNode/serial/DeployApp2Nodes (9.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-216000 -- rollout status deployment/busybox: (3.886156816s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-c4gl8 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- nslookup kubernetes.io: exit status 1 (160.835058ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-mhxxv could not resolve 'kubernetes.io': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-c4gl8 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- nslookup kubernetes.default: exit status 1 (156.831341ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-mhxxv could not resolve 'kubernetes.default': exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-c4gl8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (157.458949ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-mhxxv could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
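Note: in the exchange above, busybox-6b86dd6d48-c4gl8 resolves all three names while busybox-6b86dd6d48-mhxxv resolves none, and the earlier jsonpath check found only one pod IP for the two-replica deployment, so the symptom looks like a per-pod networking problem rather than a CoreDNS outage. A short manual triage sketch, assuming kubectl is on the PATH and uses the multinode-216000 context created by minikube (the same commands can also be run through `out/minikube-darwin-amd64 kubectl -p multinode-216000 --`, as the test does):

	# Where did the two busybox pods land, and do both have pod IPs?
	kubectl --context multinode-216000 get pods -o wide

	# Is CoreDNS running, and does the kube-dns Service match the 10.96.0.10 server seen above?
	kubectl --context multinode-216000 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context multinode-216000 -n kube-system get svc kube-dns

	# Re-run the failing lookup by hand from the suspect pod
	kubectl --context multinode-216000 exec busybox-6b86dd6d48-mhxxv -- nslookup kubernetes.default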
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-216000
helpers_test.go:235: (dbg) docker inspect multinode-216000:

-- stdout --
	[
	    {
	        "Id": "d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959",
	        "Created": "2023-02-23T04:45:13.001856261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91099,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T04:45:13.302858879Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/hosts",
	        "LogPath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959-json.log",
	        "Name": "/multinode-216000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-216000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-216000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965/merged",
	                "UpperDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965/diff",
	                "WorkDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-216000",
	                "Source": "/var/lib/docker/volumes/multinode-216000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-216000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-216000",
	                "name.minikube.sigs.k8s.io": "multinode-216000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d1c2184d95f5b2cfb1b864dc674bd5ec65e2eab2a6e3049daa7f510b2cbbfd3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51081"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51084"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7d1c2184d95f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-216000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1c0f7655c15",
	                        "multinode-216000"
	                    ],
	                    "NetworkID": "e104cc785eb296a0aa06f78ef3ef072e8cf133e0149d2eac0fdc506bb97fa0a6",
	                    "EndpointID": "bc51ae122101bda0410b593b0e1a23a47ed9855bf39114751624238086d03650",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-216000 -n multinode-216000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 logs -n 25: (2.702328888s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-005000                                  | second-005000        | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| delete  | -p second-005000                                  | second-005000        | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	| delete  | -p first-003000                                   | first-003000         | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	| start   | -p mount-start-1-599000                           | mount-start-1-599000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-1-599000 ssh -- ls                    | mount-start-1-599000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-621000 ssh -- ls                    | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-599000                           | mount-start-1-599000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-621000 ssh -- ls                    | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	| start   | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:45 PST |
	| ssh     | mount-start-2-621000 ssh -- ls                    | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:45 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:45 PST |
	| delete  | -p mount-start-1-599000                           | mount-start-1-599000 | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:45 PST |
	| start   | -p multinode-216000                               | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:46 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- apply -f                   | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- rollout                    | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- get pods -o                | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- get pods -o                | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST |                     |
	|         | busybox-6b86dd6d48-mhxxv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST |                     |
	|         | busybox-6b86dd6d48-mhxxv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST |                     |
	|         | busybox-6b86dd6d48-mhxxv -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 20:45:04
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 20:45:04.991762    8582 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:45:04.991911    8582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:45:04.991916    8582 out.go:309] Setting ErrFile to fd 2...
	I0222 20:45:04.991921    8582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:45:04.992030    8582 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:45:04.993498    8582 out.go:303] Setting JSON to false
	I0222 20:45:05.012255    8582 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2680,"bootTime":1677124825,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 20:45:05.012349    8582 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 20:45:05.033766    8582 out.go:177] * [multinode-216000] minikube v1.29.0 on Darwin 13.2
	I0222 20:45:05.076206    8582 notify.go:220] Checking for updates...
	I0222 20:45:05.099860    8582 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 20:45:05.120007    8582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:05.142037    8582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 20:45:05.164182    8582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 20:45:05.186248    8582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 20:45:05.207836    8582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 20:45:05.229298    8582 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 20:45:05.289287    8582 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 20:45:05.289412    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:45:05.434754    8582 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:45:05.341733097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:45:05.477485    8582 out.go:177] * Using the docker driver based on user configuration
	I0222 20:45:05.498783    8582 start.go:296] selected driver: docker
	I0222 20:45:05.498808    8582 start.go:857] validating driver "docker" against <nil>
	I0222 20:45:05.498827    8582 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 20:45:05.502805    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:45:05.643740    8582 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:45:05.552070913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:45:05.643851    8582 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0222 20:45:05.644016    8582 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0222 20:45:05.665866    8582 out.go:177] * Using Docker Desktop driver with root privileges
	I0222 20:45:05.687438    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:45:05.687466    8582 cni.go:136] 0 nodes found, recommending kindnet
	I0222 20:45:05.687476    8582 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0222 20:45:05.687499    8582 start_flags.go:319] config:
	{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:45:05.709548    8582 out.go:177] * Starting control plane node multinode-216000 in cluster multinode-216000
	I0222 20:45:05.731582    8582 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 20:45:05.753496    8582 out.go:177] * Pulling base image ...
	I0222 20:45:05.795718    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:45:05.795782    8582 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 20:45:05.795831    8582 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 20:45:05.795849    8582 cache.go:57] Caching tarball of preloaded images
	I0222 20:45:05.796078    8582 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 20:45:05.796098    8582 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 20:45:05.800060    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:45:05.800099    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json: {Name:mk00bbe28257c4f32206da7d58c62be073f76fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
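The two lines above show the profile config being persisted to config.json while a write lock (500ms retry delay, 1m timeout) is held. Below is a minimal sketch of that save step, assuming a heavily trimmed stand-in struct and an illustrative path; minikube's actual lock implementation and config type are more involved than the temp-file-plus-rename shown here.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// profileConfig is a deliberately tiny stand-in for minikube's cluster config.
type profileConfig struct {
	Name              string `json:"Name"`
	KubernetesVersion string `json:"KubernetesVersion"`
	Driver            string `json:"Driver"`
}

// saveConfig writes cfg as pretty-printed JSON and renames it into place so
// readers never observe a half-written file.
func saveConfig(path string, cfg profileConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	cfg := profileConfig{Name: "multinode-216000", KubernetesVersion: "v1.26.1", Driver: "docker"}
	// Illustrative output path, not the profile directory used in the log.
	if err := saveConfig("/tmp/multinode-216000-config.json", cfg); err != nil {
		fmt.Fprintln(os.Stderr, "save failed:", err)
		os.Exit(1)
	}
	fmt.Println("config saved")
}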
	I0222 20:45:05.851536    8582 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 20:45:05.851554    8582 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 20:45:05.851573    8582 cache.go:193] Successfully downloaded all kic artifacts
	I0222 20:45:05.851611    8582 start.go:364] acquiring machines lock for multinode-216000: {Name:mk63d9e74b465394c1d51e2bb23e39dc13c4550b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 20:45:05.851759    8582 start.go:368] acquired machines lock for "multinode-216000" in 135.387µs
	I0222 20:45:05.851791    8582 start.go:93] Provisioning new machine with config: &{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 20:45:05.851856    8582 start.go:125] createHost starting for "" (driver="docker")
	I0222 20:45:05.873595    8582 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0222 20:45:05.873894    8582 start.go:159] libmachine.API.Create for "multinode-216000" (driver="docker")
	I0222 20:45:05.873940    8582 client.go:168] LocalClient.Create starting
	I0222 20:45:05.874111    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
	I0222 20:45:05.874189    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:45:05.874221    8582 main.go:141] libmachine: Parsing certificate...
	I0222 20:45:05.874365    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
	I0222 20:45:05.874437    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:45:05.874454    8582 main.go:141] libmachine: Parsing certificate...
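The "Reading certificate data / Decoding PEM data / Parsing certificate" steps map directly onto Go's standard encoding/pem and crypto/x509 packages. A small, self-contained sketch of that sequence (the ca.pem path is copied from the log; error handling is abbreviated):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path as it appears in the log; adjust for your own minikube home.
	data, err := os.ReadFile("/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	// Decode the first PEM block, then parse it as an X.509 certificate.
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("subject=%v notAfter=%v\n", cert.Subject, cert.NotAfter)
}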
	I0222 20:45:05.875240    8582 cli_runner.go:164] Run: docker network inspect multinode-216000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0222 20:45:05.929549    8582 cli_runner.go:211] docker network inspect multinode-216000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0222 20:45:05.929656    8582 network_create.go:281] running [docker network inspect multinode-216000] to gather additional debugging logs...
	I0222 20:45:05.929674    8582 cli_runner.go:164] Run: docker network inspect multinode-216000
	W0222 20:45:05.983920    8582 cli_runner.go:211] docker network inspect multinode-216000 returned with exit code 1
	I0222 20:45:05.983954    8582 network_create.go:284] error running [docker network inspect multinode-216000]: docker network inspect multinode-216000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-216000
	I0222 20:45:05.983972    8582 network_create.go:286] output of [docker network inspect multinode-216000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-216000
	
	** /stderr **
	I0222 20:45:05.984073    8582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0222 20:45:06.041131    8582 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 20:45:06.041464    8582 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003264e0}
	I0222 20:45:06.041477    8582 network_create.go:123] attempt to create docker network multinode-216000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0222 20:45:06.041550    8582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-216000 multinode-216000
	I0222 20:45:06.129740    8582 network_create.go:107] docker network multinode-216000 192.168.58.0/24 created
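In the lines above, network.go skips 192.168.49.0/24 (already reserved) and settles on 192.168.58.0/24 before running `docker network create`. A simplified sketch of that walk over candidate /24 subnets, shelling out to the same docker command; the candidate list, hard-coded gateway, and helper names here are illustrative assumptions, not minikube's exact algorithm.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// pickSubnet returns the first candidate /24 that is not in the reserved set.
func pickSubnet(candidates []string, reserved map[string]bool) (string, bool) {
	for _, c := range candidates {
		if !reserved[c] {
			return c, true
		}
	}
	return "", false
}

func main() {
	reserved := map[string]bool{"192.168.49.0/24": true} // e.g. already used by another minikube network
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}

	subnet, ok := pickSubnet(candidates, reserved)
	if !ok {
		log.Fatal("no free subnet found")
	}
	gateway := "192.168.58.1" // first host address of the chosen subnet, hard-coded for brevity

	// Mirrors the docker network create invocation seen in the log.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=multinode-216000",
		"multinode-216000")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("network create failed: %v\n%s", err, out)
	}
	fmt.Printf("created network multinode-216000 (%s)\n", subnet)
}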
	I0222 20:45:06.129772    8582 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-216000" container
	I0222 20:45:06.129900    8582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0222 20:45:06.185448    8582 cli_runner.go:164] Run: docker volume create multinode-216000 --label name.minikube.sigs.k8s.io=multinode-216000 --label created_by.minikube.sigs.k8s.io=true
	I0222 20:45:06.240948    8582 oci.go:103] Successfully created a docker volume multinode-216000
	I0222 20:45:06.241064    8582 cli_runner.go:164] Run: docker run --rm --name multinode-216000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000 --entrypoint /usr/bin/test -v multinode-216000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0222 20:45:06.691830    8582 oci.go:107] Successfully prepared a docker volume multinode-216000
	I0222 20:45:06.691869    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:45:06.691885    8582 kic.go:190] Starting extracting preloaded images to volume ...
	I0222 20:45:06.692005    8582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0222 20:45:12.800878    8582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.108858273s)
	I0222 20:45:12.800924    8582 kic.go:199] duration metric: took 6.109107 seconds to extract preloaded images to volume
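Extracting the preloaded image tarball into the docker volume is a one-shot `docker run` with tar as the entrypoint, exactly as logged above. Driving that same command from Go could look like the sketch below (paths, volume name, and image digest are copied from the log; the wrapper itself is an illustration, not minikube's cli_runner).

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	tarball := "/Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4"
	volume := "multinode-216000"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc"

	start := time.Now()
	// tar runs inside a throwaway container; the tarball is mounted read-only
	// and the target volume is mounted at /extractDir.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("preload extracted in %s", time.Since(start))
}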
	I0222 20:45:12.801150    8582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0222 20:45:12.945998    8582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-216000 --name multinode-216000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-216000 --network multinode-216000 --ip 192.168.58.2 --volume multinode-216000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0222 20:45:13.310104    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Running}}
	I0222 20:45:13.372787    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:13.433738    8582 cli_runner.go:164] Run: docker exec multinode-216000 stat /var/lib/dpkg/alternatives/iptables
	I0222 20:45:13.551142    8582 oci.go:144] the created container "multinode-216000" has a running status.
	I0222 20:45:13.551174    8582 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa...
	I0222 20:45:13.685582    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0222 20:45:13.685651    8582 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0222 20:45:13.795275    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:13.853484    8582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0222 20:45:13.853504    8582 kic_runner.go:114] Args: [docker exec --privileged multinode-216000 chown docker:docker /home/docker/.ssh/authorized_keys]
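kic.go generates an RSA key pair for the node and installs the public half as /home/docker/.ssh/authorized_keys, as the lines above show. A self-contained sketch of producing such a key pair in Go; it relies on golang.org/x/crypto/ssh for the authorized_keys encoding, uses an assumed 2048-bit key size, and writes to illustrative local filenames, so treat it as an approximation rather than minikube's actual helper.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA key; minikube's actual key size may differ.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Private key in PEM (PKCS#1) form, i.e. the id_rsa file.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}

	// Public key in authorized_keys format, i.e. the id_rsa.pub file.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote id_rsa / id_rsa.pub")
}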
	I0222 20:45:13.959164    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:14.017278    8582 machine.go:88] provisioning docker machine ...
	I0222 20:45:14.017318    8582 ubuntu.go:169] provisioning hostname "multinode-216000"
	I0222 20:45:14.017421    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:14.076626    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:14.077017    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:14.077034    8582 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-216000 && echo "multinode-216000" | sudo tee /etc/hostname
	I0222 20:45:14.222328    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-216000
	
	I0222 20:45:14.222426    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:14.279562    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:14.279904    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:14.279919    8582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-216000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-216000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-216000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 20:45:14.415417    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 20:45:14.415444    8582 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 20:45:14.415467    8582 ubuntu.go:177] setting up certificates
	I0222 20:45:14.415476    8582 provision.go:83] configureAuth start
	I0222 20:45:14.415563    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000
	I0222 20:45:14.472439    8582 provision.go:138] copyHostCerts
	I0222 20:45:14.472487    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:45:14.472540    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 20:45:14.472547    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:45:14.472646    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 20:45:14.472807    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:45:14.472854    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 20:45:14.472860    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:45:14.472923    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 20:45:14.473047    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:45:14.473078    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 20:45:14.473083    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:45:14.473146    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 20:45:14.473264    8582 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.multinode-216000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-216000]
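provision.go then issues a server certificate signed by the minikube CA, with the org and SANs listed above (the container IP, localhost, and the node hostnames). Below is a condensed crypto/x509 sketch of issuing such a SAN certificate; as a simplifying assumption the CA is generated in-process, whereas minikube loads its existing ca.pem/ca-key.pem from disk.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch (minikube would load ca.pem / ca-key.pem instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-216000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-216000"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued server cert, %d DER bytes", len(srvDER))
}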
	I0222 20:45:14.751737    8582 provision.go:172] copyRemoteCerts
	I0222 20:45:14.751802    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 20:45:14.751850    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:14.813335    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:14.908812    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0222 20:45:14.908913    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 20:45:14.925649    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0222 20:45:14.925738    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0222 20:45:14.943597    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0222 20:45:14.943681    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0222 20:45:14.960896    8582 provision.go:86] duration metric: configureAuth took 545.412695ms
	I0222 20:45:14.960910    8582 ubuntu.go:193] setting minikube options for container-runtime
	I0222 20:45:14.961094    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:45:14.961198    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:15.039112    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:15.039475    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:15.039491    8582 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 20:45:15.175328    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 20:45:15.175341    8582 ubuntu.go:71] root file system type: overlay
	I0222 20:45:15.175434    8582 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 20:45:15.175522    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:15.234498    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:15.234856    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:15.234903    8582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 20:45:15.380173    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 20:45:15.380257    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:15.438479    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:15.438848    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:15.438867    8582 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 20:45:16.068520    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:45:15.378684440 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0222 20:45:16.068546    8582 machine.go:91] provisioned docker machine in 2.051270175s
	I0222 20:45:16.068553    8582 client.go:171] LocalClient.Create took 10.194722394s
	I0222 20:45:16.068585    8582 start.go:167] duration metric: libmachine.API.Create for "multinode-216000" took 10.19480731s
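The docker.service update a few lines above follows an idempotent pattern: write docker.service.new, diff it against the live unit, and only when they differ swap the file in and restart docker (the diff output shows exactly what changed on this run). A rough Go rendering of that check-then-swap step, shelling out to the same systemctl commands; it assumes it runs as root on the node itself and skips the SSH transport minikube actually uses.

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	current, _ := os.ReadFile(unit)             // may not exist on a fresh node
	proposed, err := os.ReadFile(unit + ".new") // written in the previous step
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(current, proposed) {
		log.Println("docker.service already up to date")
		return
	}
	// Swap the new unit in and restart docker, mirroring the logged shell one-liner.
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	run("systemctl", "-f", "daemon-reload")
	run("systemctl", "-f", "enable", "docker")
	run("systemctl", "-f", "restart", "docker")
}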
	I0222 20:45:16.068594    8582 start.go:300] post-start starting for "multinode-216000" (driver="docker")
	I0222 20:45:16.068600    8582 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 20:45:16.068683    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 20:45:16.068750    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.128459    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.223890    8582 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 20:45:16.227346    8582 command_runner.go:130] > NAME="Ubuntu"
	I0222 20:45:16.227356    8582 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0222 20:45:16.227360    8582 command_runner.go:130] > ID=ubuntu
	I0222 20:45:16.227365    8582 command_runner.go:130] > ID_LIKE=debian
	I0222 20:45:16.227370    8582 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0222 20:45:16.227373    8582 command_runner.go:130] > VERSION_ID="20.04"
	I0222 20:45:16.227379    8582 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0222 20:45:16.227384    8582 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0222 20:45:16.227388    8582 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0222 20:45:16.227401    8582 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0222 20:45:16.227407    8582 command_runner.go:130] > VERSION_CODENAME=focal
	I0222 20:45:16.227413    8582 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0222 20:45:16.227455    8582 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 20:45:16.227473    8582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 20:45:16.227481    8582 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 20:45:16.227485    8582 info.go:137] Remote host: Ubuntu 20.04.5 LTS
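The host detection above reads /etc/os-release and maps only the keys it knows about, which is why PRIVACY_POLICY_URL, VERSION_CODENAME, and UBUNTU_CODENAME produce the "no corresponding struct field found" notes. Parsing that file is a simple key=value scan; a minimal version is sketched below (the map-based return type is an assumption for brevity).

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// parseOSRelease reads KEY=value lines, stripping optional quotes.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	out := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Remote host: %s %s\n", info["NAME"], info["VERSION"])
}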
	I0222 20:45:16.227495    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 20:45:16.227592    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 20:45:16.227764    8582 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 20:45:16.227775    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /etc/ssl/certs/31332.pem
	I0222 20:45:16.227979    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 20:45:16.235393    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:45:16.253527    8582 start.go:303] post-start completed in 184.925048ms
	I0222 20:45:16.254077    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000
	I0222 20:45:16.313910    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:45:16.314325    8582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 20:45:16.314389    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.373772    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.467190    8582 command_runner.go:130] > 9%!
	(MISSING)I0222 20:45:16.467268    8582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 20:45:16.471676    8582 command_runner.go:130] > 51G
	I0222 20:45:16.471999    8582 start.go:128] duration metric: createHost completed in 10.62025762s
	I0222 20:45:16.472013    8582 start.go:83] releasing machines lock for "multinode-216000", held for 10.620367722s
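Before releasing the machines lock, the start code samples disk usage on /var with the two df pipelines shown above (percent used, then GB free). The same probe expressed in Go, parsing the awk output; the 20G warning threshold below is invented for illustration and is not minikube's actual limit.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
)

// dfField runs a df pipeline and returns its trimmed single-field output.
func dfField(pipeline string) (string, error) {
	out, err := exec.Command("sh", "-c", pipeline).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	usedPct, err := dfField(`df -h /var | awk 'NR==2{print $5}'`) // e.g. "9%"
	if err != nil {
		log.Fatal(err)
	}
	freeStr, err := dfField(`df -BG /var | awk 'NR==2{print $4}'`) // e.g. "51G"
	if err != nil {
		log.Fatal(err)
	}
	freeGB, _ := strconv.Atoi(strings.TrimSuffix(freeStr, "G"))

	fmt.Printf("/var: %s used, %dG free\n", usedPct, freeGB)
	if freeGB < 20 { // illustrative threshold only
		fmt.Println("warning: low disk space for images and volumes")
	}
}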
	I0222 20:45:16.472099    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000
	I0222 20:45:16.575326    8582 ssh_runner.go:195] Run: cat /version.json
	I0222 20:45:16.575327    8582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 20:45:16.575419    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.575449    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.638565    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.638608    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.730737    8582 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
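The /version.json baked into the kicbase image carries the ISO, kicbase, and minikube versions plus the build commit shown above; decoding it is a one-struct json.Unmarshal. A sketch, with the struct field names taken from the logged payload and the payload itself inlined for a runnable example:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// versionInfo mirrors the /version.json payload printed in the log.
type versionInfo struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw := []byte(`{"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}`)

	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("kicbase %s built for minikube %s (commit %.7s)\n", v.KicbaseVersion, v.MinikubeVersion, v.Commit)
}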
	I0222 20:45:16.730871    8582 ssh_runner.go:195] Run: systemctl --version
	I0222 20:45:16.788478    8582 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0222 20:45:16.788535    8582 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0222 20:45:16.788559    8582 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0222 20:45:16.788648    8582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 20:45:16.793272    8582 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0222 20:45:16.793281    8582 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0222 20:45:16.793286    8582 command_runner.go:130] > Device: a6h/166d	Inode: 393237      Links: 1
	I0222 20:45:16.793291    8582 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:45:16.793297    8582 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:45:16.793302    8582 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:45:16.793306    8582 command_runner.go:130] > Change: 2023-02-23 04:22:34.614629251 +0000
	I0222 20:45:16.793309    8582 command_runner.go:130] >  Birth: -
	I0222 20:45:16.793706    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 20:45:16.814078    8582 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
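The find/sed one-liner above makes sure the loopback CNI config carries a "name" field and pins cniVersion to 1.0.0. The same patch can be expressed as a JSON round-trip instead of sed, as sketched below; the file path is copied from the stat output earlier, and the rewrite-in-place (no find over multiple files, no .mk_disabled handling) is deliberately simplified.

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf"
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}
	// Mirror the sed edits: add a name if missing, pin the CNI version.
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"

	patched, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Printf("patched %s", path)
}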
	I0222 20:45:16.814148    8582 ssh_runner.go:195] Run: which cri-dockerd
	I0222 20:45:16.818009    8582 command_runner.go:130] > /usr/bin/cri-dockerd
	I0222 20:45:16.818126    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 20:45:16.825491    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 20:45:16.838269    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0222 20:45:16.852840    8582 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0222 20:45:16.852865    8582 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0222 20:45:16.852876    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:45:16.852888    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:45:16.852968    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:45:16.865179    8582 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0222 20:45:16.865215    8582 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0222 20:45:16.866103    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 20:45:16.874487    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 20:45:16.883064    8582 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 20:45:16.883124    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 20:45:16.891544    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:45:16.899891    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 20:45:16.908854    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:45:16.917241    8582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 20:45:16.925052    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 20:45:16.933701    8582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 20:45:16.940344    8582 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0222 20:45:16.941138    8582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 20:45:16.948217    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:45:17.016318    8582 ssh_runner.go:195] Run: sudo systemctl restart containerd
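The sequence above rewrites /etc/containerd/config.toml with targeted sed substitutions (pause image, cgroup driver, runtime class, CNI conf dir) before reloading and restarting containerd. One of those edits, forcing SystemdCgroup = false for the cgroupfs driver, rendered as a line-oriented regexp rewrite in Go; this reproduces only that single substitution and is not a general TOML editor.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(raw, []byte("${1}SystemdCgroup = false"))

	if err := os.WriteFile(path, patched, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("containerd configured for the cgroupfs driver")
}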
	I0222 20:45:17.088679    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:45:17.088698    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:45:17.088767    8582 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 20:45:17.098323    8582 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0222 20:45:17.098601    8582 command_runner.go:130] > [Unit]
	I0222 20:45:17.098610    8582 command_runner.go:130] > Description=Docker Application Container Engine
	I0222 20:45:17.098615    8582 command_runner.go:130] > Documentation=https://docs.docker.com
	I0222 20:45:17.098620    8582 command_runner.go:130] > BindsTo=containerd.service
	I0222 20:45:17.098627    8582 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0222 20:45:17.098636    8582 command_runner.go:130] > Wants=network-online.target
	I0222 20:45:17.098660    8582 command_runner.go:130] > Requires=docker.socket
	I0222 20:45:17.098674    8582 command_runner.go:130] > StartLimitBurst=3
	I0222 20:45:17.098685    8582 command_runner.go:130] > StartLimitIntervalSec=60
	I0222 20:45:17.098699    8582 command_runner.go:130] > [Service]
	I0222 20:45:17.098709    8582 command_runner.go:130] > Type=notify
	I0222 20:45:17.098725    8582 command_runner.go:130] > Restart=on-failure
	I0222 20:45:17.098741    8582 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0222 20:45:17.098750    8582 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0222 20:45:17.098756    8582 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0222 20:45:17.098762    8582 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0222 20:45:17.098767    8582 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0222 20:45:17.098772    8582 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0222 20:45:17.098777    8582 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0222 20:45:17.098788    8582 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0222 20:45:17.098794    8582 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0222 20:45:17.098797    8582 command_runner.go:130] > ExecStart=
	I0222 20:45:17.098808    8582 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0222 20:45:17.098813    8582 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0222 20:45:17.098819    8582 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0222 20:45:17.098841    8582 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0222 20:45:17.098849    8582 command_runner.go:130] > LimitNOFILE=infinity
	I0222 20:45:17.098853    8582 command_runner.go:130] > LimitNPROC=infinity
	I0222 20:45:17.098856    8582 command_runner.go:130] > LimitCORE=infinity
	I0222 20:45:17.098865    8582 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0222 20:45:17.098877    8582 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0222 20:45:17.098888    8582 command_runner.go:130] > TasksMax=infinity
	I0222 20:45:17.098895    8582 command_runner.go:130] > TimeoutStartSec=0
	I0222 20:45:17.098902    8582 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0222 20:45:17.098912    8582 command_runner.go:130] > Delegate=yes
	I0222 20:45:17.098924    8582 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0222 20:45:17.098933    8582 command_runner.go:130] > KillMode=process
	I0222 20:45:17.098945    8582 command_runner.go:130] > [Install]
	I0222 20:45:17.098951    8582 command_runner.go:130] > WantedBy=multi-user.target
	I0222 20:45:17.099285    8582 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 20:45:17.099367    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 20:45:17.110324    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:45:17.123770    8582 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:45:17.123794    8582 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:45:17.124722    8582 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 20:45:17.231373    8582 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 20:45:17.294094    8582 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 20:45:17.294115    8582 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 20:45:17.331972    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:45:17.429640    8582 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 20:45:17.653587    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:45:17.727213    8582 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0222 20:45:17.727382    8582 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 20:45:17.796152    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:45:17.866182    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:45:17.936312    8582 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 20:45:17.956762    8582 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 20:45:17.956849    8582 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 20:45:17.960898    8582 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0222 20:45:17.960909    8582 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0222 20:45:17.960915    8582 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0222 20:45:17.960920    8582 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0222 20:45:17.960928    8582 command_runner.go:130] > Access: 2023-02-23 04:45:17.943684243 +0000
	I0222 20:45:17.960936    8582 command_runner.go:130] > Modify: 2023-02-23 04:45:17.943684243 +0000
	I0222 20:45:17.960941    8582 command_runner.go:130] > Change: 2023-02-23 04:45:17.953684243 +0000
	I0222 20:45:17.960944    8582 command_runner.go:130] >  Birth: -
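start.go waits up to 60s for /var/run/cri-dockerd.sock to appear and then stats it, as the lines above show. A small polling helper with the same timeout is sketched below; the 500ms poll interval is an assumption, not minikube's actual cadence.

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls until path exists as a unix socket or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond) // illustrative poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("cri-dockerd socket is ready")
}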
	I0222 20:45:17.960964    8582 start.go:553] Will wait 60s for crictl version
	I0222 20:45:17.961005    8582 ssh_runner.go:195] Run: which crictl
	I0222 20:45:17.964655    8582 command_runner.go:130] > /usr/bin/crictl
	I0222 20:45:17.964838    8582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 20:45:18.060867    8582 command_runner.go:130] > Version:  0.1.0
	I0222 20:45:18.060879    8582 command_runner.go:130] > RuntimeName:  docker
	I0222 20:45:18.060884    8582 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0222 20:45:18.060889    8582 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0222 20:45:18.062862    8582 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 20:45:18.062943    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:45:18.086461    8582 command_runner.go:130] > 23.0.1
	I0222 20:45:18.088071    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:45:18.110793    8582 command_runner.go:130] > 23.0.1
	I0222 20:45:18.155382    8582 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 20:45:18.155536    8582 cli_runner.go:164] Run: docker exec -t multinode-216000 dig +short host.docker.internal
	I0222 20:45:18.267541    8582 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 20:45:18.267659    8582 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 20:45:18.272361    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:45:18.282378    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:18.341346    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:45:18.341427    8582 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 20:45:18.360199    8582 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0222 20:45:18.360212    8582 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0222 20:45:18.360217    8582 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0222 20:45:18.360224    8582 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0222 20:45:18.360239    8582 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0222 20:45:18.360244    8582 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0222 20:45:18.360250    8582 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0222 20:45:18.360256    8582 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 20:45:18.362067    8582 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 20:45:18.362082    8582 docker.go:560] Images already preloaded, skipping extraction
	I0222 20:45:18.362183    8582 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 20:45:18.380254    8582 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0222 20:45:18.380274    8582 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0222 20:45:18.380282    8582 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0222 20:45:18.380292    8582 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0222 20:45:18.380299    8582 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0222 20:45:18.380306    8582 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0222 20:45:18.380315    8582 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0222 20:45:18.380328    8582 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 20:45:18.381893    8582 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 20:45:18.381905    8582 cache_images.go:84] Images are preloaded, skipping loading
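
Both `docker images --format {{.Repository}}:{{.Tag}}` listings above already contain every image from the v1.26.1 preload, so tarball extraction and image loading are skipped. A minimal sketch of that kind of presence check, assuming a hypothetical required-image list and a docker CLI on PATH (not minikube's code):

// preloadcheck.go: sketch of the check logged above; list local image tags and
// report any expected Kubernetes images that are missing.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// Hypothetical subset of the preload manifest, for illustration only.
var required = []string{
	"registry.k8s.io/kube-apiserver:v1.26.1",
	"registry.k8s.io/etcd:3.5.6-0",
	"registry.k8s.io/pause:3.9",
	"registry.k8s.io/coredns/coredns:v1.9.3",
}

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		have[sc.Text()] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img)
		}
	}
}
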
	I0222 20:45:18.381998    8582 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 20:45:18.406094    8582 command_runner.go:130] > cgroupfs
	I0222 20:45:18.407712    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:45:18.407725    8582 cni.go:136] 1 nodes found, recommending kindnet
	I0222 20:45:18.407744    8582 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 20:45:18.407765    8582 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-216000 NodeName:multinode-216000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 20:45:18.407883    8582 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-216000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 20:45:18.407963    8582 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-216000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
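
The generated file is a four-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) plus a kubelet systemd drop-in. A quick, hypothetical way to sanity-check such a multi-document file before `kubeadm init --config` is to decode each document and print its apiVersion and kind; the sketch below assumes a local kubeadm.yaml and uses gopkg.in/yaml.v3, and is not part of minikube.

// kubeadmcheck.go: sketch that decodes each YAML document in a generated
// kubeadm config and prints its apiVersion/kind as a quick sanity check.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // path is an assumption for the example
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

For the config logged above this would print four lines, from kubeadm.k8s.io/v1beta3 InitConfiguration through kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.
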
	I0222 20:45:18.408033    8582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 20:45:18.415368    8582 command_runner.go:130] > kubeadm
	I0222 20:45:18.415380    8582 command_runner.go:130] > kubectl
	I0222 20:45:18.415386    8582 command_runner.go:130] > kubelet
	I0222 20:45:18.416303    8582 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 20:45:18.416357    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 20:45:18.423829    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0222 20:45:18.437399    8582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 20:45:18.450918    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0222 20:45:18.464004    8582 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0222 20:45:18.468378    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
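
Both host.minikube.internal and control-plane.minikube.internal are added the same way: any stale line for the name is filtered out of /etc/hosts and the fresh mapping is appended. A small illustrative sketch of that idempotent rewrite follows; it is not minikube's code and, unlike the tmp-copy-then-cp one-liner in the log, it writes the file in place.

// hostsentry.go: sketch of the idempotent /etc/hosts update logged above.
// Assumes root; the name/IP constants mirror the values in this run.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const name, ip = "control-plane.minikube.internal", "192.168.58.2"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
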
	I0222 20:45:18.478968    8582 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000 for IP: 192.168.58.2
	I0222 20:45:18.479007    8582 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.479233    8582 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 20:45:18.479298    8582 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 20:45:18.479350    8582 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key
	I0222 20:45:18.479363    8582 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt with IP's: []
	I0222 20:45:18.807872    8582 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt ...
	I0222 20:45:18.807890    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt: {Name:mk734ac8a5dfe0a534e9eb7b833d4a5e48c8bc37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.808232    8582 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key ...
	I0222 20:45:18.808240    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key: {Name:mka355a8e15740137d1e2e5ff0e4b2c22c313a89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.808486    8582 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041
	I0222 20:45:18.808503    8582 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0222 20:45:18.994693    8582 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041 ...
	I0222 20:45:18.994706    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041: {Name:mk63b66cc283eb07720bb76a77d00d37e04a39d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.994973    8582 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041 ...
	I0222 20:45:18.994983    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041: {Name:mk8329c082e2a26c2595c267885c85db2235c6f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.995165    8582 certs.go:333] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt
	I0222 20:45:18.995350    8582 certs.go:337] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key
	I0222 20:45:18.995515    8582 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key
	I0222 20:45:18.995532    8582 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt with IP's: []
	I0222 20:45:19.113820    8582 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt ...
	I0222 20:45:19.113828    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt: {Name:mk8138fac670db3215d5364fec33c5ab93eb8c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:19.114028    8582 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key ...
	I0222 20:45:19.114036    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key: {Name:mk5e8e4f17feb0310021a3cb9d6f540378c4c54b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:19.114212    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0222 20:45:19.114240    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0222 20:45:19.114260    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0222 20:45:19.114282    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0222 20:45:19.114301    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0222 20:45:19.114320    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0222 20:45:19.114337    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0222 20:45:19.114355    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0222 20:45:19.114445    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 20:45:19.114491    8582 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 20:45:19.114502    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 20:45:19.114536    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 20:45:19.114565    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 20:45:19.114593    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 20:45:19.114658    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:45:19.114691    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.114711    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem -> /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.114730    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.115133    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 20:45:19.135159    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0222 20:45:19.153613    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 20:45:19.170940    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0222 20:45:19.188737    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 20:45:19.206355    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 20:45:19.224583    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 20:45:19.242301    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 20:45:19.260332    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 20:45:19.278549    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 20:45:19.295847    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 20:45:19.313581    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 20:45:19.326802    8582 ssh_runner.go:195] Run: openssl version
	I0222 20:45:19.332108    8582 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0222 20:45:19.332474    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 20:45:19.340663    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.344543    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.344695    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.344737    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.350047    8582 command_runner.go:130] > b5213941
	I0222 20:45:19.350235    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 20:45:19.358345    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 20:45:19.366377    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.370383    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.370487    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.370538    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.375953    8582 command_runner.go:130] > 51391683
	I0222 20:45:19.376472    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 20:45:19.384513    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 20:45:19.393148    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.397295    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.397427    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.397484    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.402536    8582 command_runner.go:130] > 3ec20f2e
	I0222 20:45:19.402861    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
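
Each CA copied above is hashed with `openssl x509 -hash -noout` and exposed to the system trust store through an /etc/ssl/certs/<hash>.0 symlink, which is what the three test/ln -fs commands do. A hedged sketch of that single step, assuming root and an openssl binary on PATH (the input path is just an example; this is not minikube's code):

// certhashlink.go: sketch of the trust-store symlinking logged above. It asks
// openssl for the certificate's subject hash and links /etc/ssl/certs/<hash>.0
// at the PEM file, mirroring `ln -fs`.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example input
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link if present, as `ln -fs` would
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, pem)
}
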
	I0222 20:45:19.411005    8582 kubeadm.go:401] StartCluster: {Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:45:19.411119    8582 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 20:45:19.430184    8582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 20:45:19.438275    8582 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0222 20:45:19.438286    8582 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0222 20:45:19.438291    8582 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0222 20:45:19.438350    8582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 20:45:19.445915    8582 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 20:45:19.445966    8582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 20:45:19.453427    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0222 20:45:19.453448    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0222 20:45:19.453455    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0222 20:45:19.453461    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 20:45:19.453483    8582 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 20:45:19.453502    8582 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 20:45:19.505948    8582 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0222 20:45:19.505948    8582 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0222 20:45:19.505994    8582 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 20:45:19.506008    8582 command_runner.go:130] > [preflight] Running pre-flight checks
	I0222 20:45:19.613107    8582 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 20:45:19.613126    8582 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 20:45:19.613208    8582 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 20:45:19.613217    8582 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 20:45:19.613306    8582 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 20:45:19.613324    8582 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 20:45:19.743015    8582 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 20:45:19.743037    8582 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 20:45:19.786700    8582 out.go:204]   - Generating certificates and keys ...
	I0222 20:45:19.786782    8582 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0222 20:45:19.786805    8582 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 20:45:19.786865    8582 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0222 20:45:19.786871    8582 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 20:45:19.850709    8582 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 20:45:19.850717    8582 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 20:45:19.987582    8582 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0222 20:45:19.987592    8582 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0222 20:45:20.111956    8582 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0222 20:45:20.111984    8582 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0222 20:45:20.367090    8582 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0222 20:45:20.367149    8582 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0222 20:45:20.460324    8582 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0222 20:45:20.460337    8582 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0222 20:45:20.460440    8582 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.460448    8582 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.634190    8582 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0222 20:45:20.634208    8582 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0222 20:45:20.634336    8582 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.634348    8582 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.712988    8582 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 20:45:20.713007    8582 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 20:45:20.783015    8582 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 20:45:20.783030    8582 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 20:45:20.893467    8582 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0222 20:45:20.893477    8582 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0222 20:45:20.893513    8582 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 20:45:20.893518    8582 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 20:45:21.008488    8582 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 20:45:21.008501    8582 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 20:45:21.326406    8582 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 20:45:21.326428    8582 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 20:45:21.451239    8582 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 20:45:21.451254    8582 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 20:45:21.784843    8582 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 20:45:21.784877    8582 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 20:45:21.797053    8582 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:45:21.797084    8582 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:45:21.798086    8582 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:45:21.798097    8582 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:45:21.798144    8582 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0222 20:45:21.798154    8582 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0222 20:45:21.874401    8582 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 20:45:21.874415    8582 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 20:45:21.924722    8582 out.go:204]   - Booting up control plane ...
	I0222 20:45:21.924875    8582 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 20:45:21.924941    8582 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 20:45:21.925026    8582 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 20:45:21.925031    8582 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 20:45:21.925080    8582 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 20:45:21.925091    8582 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 20:45:21.925218    8582 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 20:45:21.925227    8582 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 20:45:21.925379    8582 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 20:45:21.925412    8582 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 20:45:30.880182    8582 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.001447 seconds
	I0222 20:45:30.880189    8582 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.001447 seconds
	I0222 20:45:30.880320    8582 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0222 20:45:30.880330    8582 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0222 20:45:30.891029    8582 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0222 20:45:30.891059    8582 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0222 20:45:31.408095    8582 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0222 20:45:31.408105    8582 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0222 20:45:31.408264    8582 kubeadm.go:322] [mark-control-plane] Marking the node multinode-216000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0222 20:45:31.408278    8582 command_runner.go:130] > [mark-control-plane] Marking the node multinode-216000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0222 20:45:31.917178    8582 kubeadm.go:322] [bootstrap-token] Using token: 5jwevw.jx77rxsr3wyi2rry
	I0222 20:45:31.917198    8582 command_runner.go:130] > [bootstrap-token] Using token: 5jwevw.jx77rxsr3wyi2rry
	I0222 20:45:31.954318    8582 out.go:204]   - Configuring RBAC rules ...
	I0222 20:45:31.954483    8582 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0222 20:45:31.954497    8582 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0222 20:45:31.957489    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0222 20:45:31.957508    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0222 20:45:31.999853    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0222 20:45:31.999861    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0222 20:45:32.002246    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0222 20:45:32.002251    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0222 20:45:32.004573    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0222 20:45:32.004583    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0222 20:45:32.006862    8582 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0222 20:45:32.006871    8582 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0222 20:45:32.015273    8582 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0222 20:45:32.015289    8582 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0222 20:45:32.155838    8582 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0222 20:45:32.155852    8582 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0222 20:45:32.361026    8582 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0222 20:45:32.361044    8582 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0222 20:45:32.361629    8582 kubeadm.go:322] 
	I0222 20:45:32.361720    8582 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0222 20:45:32.361732    8582 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0222 20:45:32.361742    8582 kubeadm.go:322] 
	I0222 20:45:32.361800    8582 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0222 20:45:32.361816    8582 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0222 20:45:32.361823    8582 kubeadm.go:322] 
	I0222 20:45:32.361845    8582 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0222 20:45:32.361850    8582 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0222 20:45:32.361900    8582 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0222 20:45:32.361904    8582 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0222 20:45:32.361943    8582 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0222 20:45:32.361951    8582 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0222 20:45:32.361965    8582 kubeadm.go:322] 
	I0222 20:45:32.362025    8582 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0222 20:45:32.362032    8582 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0222 20:45:32.362038    8582 kubeadm.go:322] 
	I0222 20:45:32.362101    8582 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0222 20:45:32.362109    8582 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0222 20:45:32.362120    8582 kubeadm.go:322] 
	I0222 20:45:32.362169    8582 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0222 20:45:32.362176    8582 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0222 20:45:32.362264    8582 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0222 20:45:32.362275    8582 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0222 20:45:32.362369    8582 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0222 20:45:32.362373    8582 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0222 20:45:32.362385    8582 kubeadm.go:322] 
	I0222 20:45:32.362444    8582 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0222 20:45:32.362449    8582 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0222 20:45:32.362507    8582 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0222 20:45:32.362509    8582 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0222 20:45:32.362516    8582 kubeadm.go:322] 
	I0222 20:45:32.362570    8582 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362574    8582 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362650    8582 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf \
	I0222 20:45:32.362656    8582 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf \
	I0222 20:45:32.362670    8582 command_runner.go:130] > 	--control-plane 
	I0222 20:45:32.362673    8582 kubeadm.go:322] 	--control-plane 
	I0222 20:45:32.362676    8582 kubeadm.go:322] 
	I0222 20:45:32.362745    8582 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0222 20:45:32.362759    8582 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0222 20:45:32.362771    8582 kubeadm.go:322] 
	I0222 20:45:32.362848    8582 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362858    8582 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362965    8582 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 20:45:32.362978    8582 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 20:45:32.419040    8582 kubeadm.go:322] W0223 04:45:19.499041    1297 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 20:45:32.419068    8582 command_runner.go:130] ! W0223 04:45:19.499041    1297 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 20:45:32.419231    8582 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 20:45:32.419250    8582 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 20:45:32.419421    8582 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 20:45:32.419430    8582 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 20:45:32.419451    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:45:32.419464    8582 cni.go:136] 1 nodes found, recommending kindnet
	I0222 20:45:32.458559    8582 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0222 20:45:32.501679    8582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0222 20:45:32.506820    8582 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0222 20:45:32.506836    8582 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0222 20:45:32.506847    8582 command_runner.go:130] > Device: a6h/166d	Inode: 267135      Links: 1
	I0222 20:45:32.506878    8582 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:45:32.506891    8582 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:45:32.506911    8582 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:45:32.506918    8582 command_runner.go:130] > Change: 2023-02-23 04:22:33.946629303 +0000
	I0222 20:45:32.506922    8582 command_runner.go:130] >  Birth: -
	I0222 20:45:32.506955    8582 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0222 20:45:32.506965    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0222 20:45:32.520767    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0222 20:45:33.085605    8582 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0222 20:45:33.089214    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0222 20:45:33.094865    8582 command_runner.go:130] > serviceaccount/kindnet created
	I0222 20:45:33.101971    8582 command_runner.go:130] > daemonset.apps/kindnet created
	I0222 20:45:33.108092    8582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0222 20:45:33.108197    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.108199    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321 minikube.k8s.io/name=multinode-216000 minikube.k8s.io/updated_at=2023_02_22T20_45_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.120620    8582 command_runner.go:130] > -16
	I0222 20:45:33.120829    8582 ops.go:34] apiserver oom_adj: -16
	I0222 20:45:33.235389    8582 command_runner.go:130] > node/multinode-216000 labeled
	I0222 20:45:33.235429    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0222 20:45:33.235530    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.298748    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:33.798947    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.863655    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:34.299929    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:34.366381    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:34.799577    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:34.862911    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:35.301034    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:35.366783    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:35.799539    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:35.866693    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:36.299697    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:36.362551    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:36.798880    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:36.861454    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:37.299204    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:37.363129    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:37.799017    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:37.865543    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:38.299118    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:38.366698    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:38.799082    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:38.863705    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:39.300415    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:39.362843    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:39.800939    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:39.865957    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:40.299731    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:40.363412    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:40.799062    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:40.863422    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:41.300979    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:41.364245    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:41.798819    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:41.862910    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:42.301010    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:42.368353    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:42.799757    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:42.860598    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:43.298910    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:43.358655    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:43.798911    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:43.862382    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:44.298810    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:44.367729    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:44.800895    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:44.878473    8582 command_runner.go:130] > NAME      SECRETS   AGE
	I0222 20:45:44.878486    8582 command_runner.go:130] > default   0         0s
	I0222 20:45:44.878501    8582 kubeadm.go:1073] duration metric: took 11.770516906s to wait for elevateKubeSystemPrivileges.
	I0222 20:45:44.878511    8582 kubeadm.go:403] StartCluster complete in 25.46780247s
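
Between 20:45:33 and 20:45:44 the log shows one `kubectl get sa default` attempt roughly every half second until kubeadm's controllers have created the namespace's default service account; that wait is what the 11.77s elevateKubeSystemPrivileges metric measures. A rough equivalent of that poll, assuming kubectl and the kubeconfig path shown in the log (not minikube's code):

// sawait.go: sketch of the service-account poll visible above; retry
// `kubectl get sa default` until it succeeds or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
	}
	log.Fatal("timed out waiting for the default service account")
}
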
	I0222 20:45:44.878534    8582 settings.go:142] acquiring lock: {Name:mk09b0ae3061a5d1df7256089aea48f15d65cbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:44.878624    8582 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:44.879095    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:44.879359    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0222 20:45:44.879382    8582 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0222 20:45:44.879439    8582 addons.go:65] Setting storage-provisioner=true in profile "multinode-216000"
	I0222 20:45:44.879459    8582 addons.go:227] Setting addon storage-provisioner=true in "multinode-216000"
	I0222 20:45:44.879460    8582 addons.go:65] Setting default-storageclass=true in profile "multinode-216000"
	I0222 20:45:44.879476    8582 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-216000"
	I0222 20:45:44.879500    8582 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:45:44.879515    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:45:44.879557    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:44.879742    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:44.879842    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:44.879816    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:45:44.883685    8582 cert_rotation.go:137] Starting client certificate rotation controller
	I0222 20:45:44.883956    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:45:44.883966    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:44.883974    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:44.883981    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:44.925690    8582 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0222 20:45:44.925708    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:44.925715    8582 round_trippers.go:580]     Audit-Id: 347f7f79-722b-4f7c-88bc-d8ed156f5606
	I0222 20:45:44.925722    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:44.925726    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:44.925731    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:44.925737    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:44.925746    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:45:44.925752    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:44 GMT
	I0222 20:45:44.925793    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"302","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:44.926183    8582 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"302","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:44.926214    8582 round_trippers.go:463] PUT https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:45:44.926218    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:44.926224    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:44.926230    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:44.926235    8582 round_trippers.go:473]     Content-Type: application/json
	I0222 20:45:44.933928    8582 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0222 20:45:44.933947    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:44.933960    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:44.933968    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:44.933986    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:44.933994    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:44.934002    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:45:44.934010    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:44 GMT
	I0222 20:45:44.934017    8582 round_trippers.go:580]     Audit-Id: 73608203-7243-4885-ac82-d5f47c1f08dd
	I0222 20:45:44.934036    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"328","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:44.979026    8582 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 20:45:44.954997    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:44.979427    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:45:45.015387    8582 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 20:45:45.015937    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/storage.k8s.io/v1/storageclasses
	I0222 20:45:45.053309    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:45.053287    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0222 20:45:45.053323    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:45.053333    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:45.053446    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:45.056950    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:45.056983    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:45.056994    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:45.057002    8582 round_trippers.go:580]     Content-Length: 109
	I0222 20:45:45.057010    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:45 GMT
	I0222 20:45:45.057017    8582 round_trippers.go:580]     Audit-Id: 5f21578f-5a11-46bc-83cd-cf8aed0de574
	I0222 20:45:45.057024    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:45.057033    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:45.057045    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:45.057078    8582 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"336"},"items":[]}
	I0222 20:45:45.057427    8582 addons.go:227] Setting addon default-storageclass=true in "multinode-216000"
	I0222 20:45:45.057470    8582 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:45:45.057994    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:45.065903    8582 command_runner.go:130] > apiVersion: v1
	I0222 20:45:45.065956    8582 command_runner.go:130] > data:
	I0222 20:45:45.065966    8582 command_runner.go:130] >   Corefile: |
	I0222 20:45:45.065972    8582 command_runner.go:130] >     .:53 {
	I0222 20:45:45.065977    8582 command_runner.go:130] >         errors
	I0222 20:45:45.065991    8582 command_runner.go:130] >         health {
	I0222 20:45:45.066007    8582 command_runner.go:130] >            lameduck 5s
	I0222 20:45:45.066017    8582 command_runner.go:130] >         }
	I0222 20:45:45.066023    8582 command_runner.go:130] >         ready
	I0222 20:45:45.066037    8582 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0222 20:45:45.066050    8582 command_runner.go:130] >            pods insecure
	I0222 20:45:45.066063    8582 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0222 20:45:45.066077    8582 command_runner.go:130] >            ttl 30
	I0222 20:45:45.066086    8582 command_runner.go:130] >         }
	I0222 20:45:45.066101    8582 command_runner.go:130] >         prometheus :9153
	I0222 20:45:45.066116    8582 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0222 20:45:45.066133    8582 command_runner.go:130] >            max_concurrent 1000
	I0222 20:45:45.066148    8582 command_runner.go:130] >         }
	I0222 20:45:45.066157    8582 command_runner.go:130] >         cache 30
	I0222 20:45:45.066162    8582 command_runner.go:130] >         loop
	I0222 20:45:45.066170    8582 command_runner.go:130] >         reload
	I0222 20:45:45.066178    8582 command_runner.go:130] >         loadbalance
	I0222 20:45:45.066182    8582 command_runner.go:130] >     }
	I0222 20:45:45.066185    8582 command_runner.go:130] > kind: ConfigMap
	I0222 20:45:45.066189    8582 command_runner.go:130] > metadata:
	I0222 20:45:45.066209    8582 command_runner.go:130] >   creationTimestamp: "2023-02-23T04:45:32Z"
	I0222 20:45:45.066219    8582 command_runner.go:130] >   name: coredns
	I0222 20:45:45.066224    8582 command_runner.go:130] >   namespace: kube-system
	I0222 20:45:45.066228    8582 command_runner.go:130] >   resourceVersion: "229"
	I0222 20:45:45.066236    8582 command_runner.go:130] >   uid: 870d5158-e67f-46a4-a4ff-0208e33d2315
	I0222 20:45:45.066492    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0222 20:45:45.125893    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:45.130388    8582 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0222 20:45:45.130402    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0222 20:45:45.130486    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:45.201315    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:45.335067    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 20:45:45.435009    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:45:45.435028    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:45.435034    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:45.435039    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:45.438380    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0222 20:45:45.438666    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:45.438680    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:45.438688    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:45.438700    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:45:45.438708    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:45 GMT
	I0222 20:45:45.438717    8582 round_trippers.go:580]     Audit-Id: a97bba55-b87b-4670-bd3c-38900e852e3e
	I0222 20:45:45.438733    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:45.438745    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:45.438757    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:45.439026    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"355","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:45.439106    8582 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-216000" context rescaled to 1 replicas
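The GET/PUT exchange above scales the kube-system/coredns Deployment from 2 replicas down to 1 through its autoscaling/v1 Scale subresource. The same rescale could be done by hand with kubectl (sketch; assumes kubectl is pointed at this cluster's kubeconfig):

    # Equivalent one-liner for the Scale subresource update shown above.
    kubectl -n kube-system scale deployment coredns --replicas=1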
	I0222 20:45:45.439132    8582 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 20:45:45.464740    8582 out.go:177] * Verifying Kubernetes components...
	I0222 20:45:45.505682    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:45:45.638198    8582 command_runner.go:130] > configmap/coredns replaced
	I0222 20:45:45.638228    8582 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
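The "host record injected" line is the outcome of the sed pipeline run at 20:45:45.066492, which rewrites the coredns ConfigMap dumped above: it adds a "log" directive after "errors" and inserts a "hosts" block mapping 192.168.65.2 to host.minikube.internal ahead of the "forward" plugin. One way to confirm the injected record afterwards, assuming the same binary and kubeconfig paths as in the log, is:

    # Print the rewritten Corefile and show the injected hosts block.
    sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
    # Expected output (per the sed expression above):
    #        hosts {
    #           192.168.65.2 host.minikube.internal
    #           fallthrough
    #        }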
	I0222 20:45:45.854922    8582 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0222 20:45:45.859426    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0222 20:45:45.924955    8582 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0222 20:45:45.933318    8582 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0222 20:45:45.944055    8582 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0222 20:45:45.951160    8582 command_runner.go:130] > pod/storage-provisioner created
	I0222 20:45:45.957315    8582 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0222 20:45:45.981431    8582 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0222 20:45:45.957508    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:46.022904    8582 addons.go:492] enable addons completed in 1.143556258s: enabled=[storage-provisioner default-storageclass]
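With "enable addons completed", the run has applied the two manifests copied in earlier (storage-provisioner.yaml and storageclass.yaml). A quick manual check that both addons landed, using the objects the apply output lists above (sketch; same kubeconfig path as in the log):

    # The storage-provisioner pod and the "standard" StorageClass were both created above.
    sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pod storage-provisioner
    sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get storageclass standard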
	I0222 20:45:46.097640    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:46.097888    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:45:46.098139    8582 node_ready.go:35] waiting up to 6m0s for node "multinode-216000" to be "Ready" ...
	I0222 20:45:46.098185    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:46.098190    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.098197    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.098203    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.119497    8582 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0222 20:45:46.119522    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.119532    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.119540    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.119565    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.119579    8582 round_trippers.go:580]     Audit-Id: 38eecc37-9749-47c3-817c-ca66d0e05505
	I0222 20:45:46.119589    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.119598    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.119732    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:46.120270    8582 node_ready.go:49] node "multinode-216000" has status "Ready":"True"
	I0222 20:45:46.120279    8582 node_ready.go:38] duration metric: took 22.126807ms waiting for node "multinode-216000" to be "Ready" ...
	I0222 20:45:46.120286    8582 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:45:46.120346    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:45:46.120352    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.120358    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.120365    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.124743    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:46.124764    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.124774    8582 round_trippers.go:580]     Audit-Id: dcefd3fe-1aa3-43e7-8c44-9a1faf1edc15
	I0222 20:45:46.124804    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.124832    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.124840    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.124847    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.124883    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.126729    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"371"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60448 chars]
	I0222 20:45:46.130680    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:45:46.130742    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:46.130748    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.130755    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.130760    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.135655    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:46.135669    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.135675    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.135685    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.135696    8582 round_trippers.go:580]     Audit-Id: 7deac8ba-52f4-4761-9cd5-feb96461e1f2
	I0222 20:45:46.135709    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.135724    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.135736    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.135863    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:46.136179    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:46.136206    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.136213    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.136218    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.139514    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:46.139532    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.139538    8582 round_trippers.go:580]     Audit-Id: cd711aab-4a7d-4496-ae61-bf22d5d792de
	I0222 20:45:46.139543    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.139547    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.139552    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.139556    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.139561    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.139650    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:46.640023    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:46.640045    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.640052    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.640058    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.642847    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:46.642878    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.642902    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.642915    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.642926    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.642951    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.642985    8582 round_trippers.go:580]     Audit-Id: 05ebcc17-1838-4e58-82e7-bb5f19bcd5a7
	I0222 20:45:46.643001    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.643690    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:46.644241    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:46.644249    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.644257    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.644265    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.647661    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:46.647676    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.647683    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.647691    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.647698    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.647705    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.647712    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.647718    8582 round_trippers.go:580]     Audit-Id: ea628f87-01ea-40ee-a670-8c3b3915e5ea
	I0222 20:45:46.648144    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:47.140073    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:47.140099    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.140147    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.140165    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.143559    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:47.143569    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.143575    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.143580    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.143586    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.143591    8582 round_trippers.go:580]     Audit-Id: 734c800c-2176-498b-8fc1-2f1161d9cff5
	I0222 20:45:47.143596    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.143601    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.143671    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:47.143954    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:47.143961    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.143967    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.143973    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.146088    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:47.146097    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.146106    8582 round_trippers.go:580]     Audit-Id: 3176398c-0784-49d6-b926-578bd3a67013
	I0222 20:45:47.146112    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.146117    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.146122    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.146127    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.146132    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.146199    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:47.640067    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:47.640079    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.640087    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.640093    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.644743    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:47.644763    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.644773    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.644781    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.644793    8582 round_trippers.go:580]     Audit-Id: f476c7a2-5334-4344-8743-7b08cd258212
	I0222 20:45:47.644819    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.644837    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.644862    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.645938    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:47.646368    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:47.646377    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.646384    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.646390    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.649640    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:47.649659    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.649669    8582 round_trippers.go:580]     Audit-Id: e37bb264-5ac4-4b01-81cf-88d5d955268c
	I0222 20:45:47.649678    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.649689    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.649698    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.649721    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.649781    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.649894    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:48.140047    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:48.140061    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.140068    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.140073    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.144873    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:48.144892    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.144900    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.144905    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.144910    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.144915    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.144920    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.144925    8582 round_trippers.go:580]     Audit-Id: 0a22433c-d6ba-4883-9080-1f85496f5899
	I0222 20:45:48.144996    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:48.145282    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:48.145289    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.145295    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.145300    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.147529    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:48.147539    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.147547    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.147553    8582 round_trippers.go:580]     Audit-Id: 3cc2ed8c-25e0-43c4-bf4d-cbfcc31c44d5
	I0222 20:45:48.147559    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.147564    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.147569    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.147575    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.147839    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:48.148072    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
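From here the log repeats the same pair of GETs (the coredns pod, then the node) roughly every 500ms until the pod reports Ready; the "has status \"Ready\":\"False\"" line above marks one completed cycle of that poll. Outside of minikube's own readiness loop, the same wait could be expressed with kubectl directly (sketch; the timeout matches the 6m budget logged at pod_ready.go:35):

    # Block until the coredns pod is Ready, or give up after 6 minutes.
    kubectl -n kube-system wait --for=condition=Ready \
      pod/coredns-787d4945fb-48v9r --timeout=6m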
	I0222 20:45:48.639988    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:48.640004    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.640011    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.640017    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.642844    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:48.642858    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.642871    8582 round_trippers.go:580]     Audit-Id: cb6099cb-4417-4df1-a82e-2d352ceb186b
	I0222 20:45:48.642877    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.642883    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.642889    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.642894    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.642900    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.642992    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:48.643332    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:48.643343    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.643352    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.643359    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.646131    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:48.646144    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.646150    8582 round_trippers.go:580]     Audit-Id: 40601f97-c81e-471d-9c5c-1e57768a6604
	I0222 20:45:48.646154    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.646159    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.646164    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.646169    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.646177    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.646318    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:49.140798    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:49.140823    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.140836    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.140847    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.144682    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:49.144703    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.144714    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.144720    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.144726    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.144732    8582 round_trippers.go:580]     Audit-Id: c7742e65-0312-4ba6-a36b-51642ef4c9e2
	I0222 20:45:49.144739    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.144744    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.144861    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:49.145138    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:49.145146    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.145152    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.145159    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.147541    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:49.147550    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.147556    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.147561    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.147566    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.147571    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.147576    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.147581    8582 round_trippers.go:580]     Audit-Id: cd3870c9-b398-4ab4-9270-8264a4a8781f
	I0222 20:45:49.147639    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:49.641195    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:49.641211    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.641220    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.641227    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.644399    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:49.644419    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.644427    8582 round_trippers.go:580]     Audit-Id: 4cba9a74-2618-4c3c-9e93-80bef23ee618
	I0222 20:45:49.644432    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.644437    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.644442    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.644449    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.644458    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.644545    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:49.644827    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:49.644834    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.644840    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.644846    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.646864    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:49.646875    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.646885    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.646891    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.646896    8582 round_trippers.go:580]     Audit-Id: db72863c-093a-4398-b0e4-7ac468ffd6f8
	I0222 20:45:49.646901    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.646906    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.646911    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.646992    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:50.140677    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:50.140702    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.140799    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.140814    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.145454    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:50.145467    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.145473    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.145484    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.145489    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.145495    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.145499    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.145505    8582 round_trippers.go:580]     Audit-Id: 2cb351e4-aabf-49f8-9655-18f70c7b2a3c
	I0222 20:45:50.145568    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:50.145869    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:50.145875    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.145881    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.145893    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.148111    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:50.148121    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.148126    8582 round_trippers.go:580]     Audit-Id: 31f35fb7-7828-4db1-ab65-b1fe682ef8ac
	I0222 20:45:50.148131    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.148137    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.148141    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.148147    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.148151    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.148210    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:50.148386    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
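
Note: the repeated GET pairs above (pod coredns-787d4945fb-48v9r, then node multinode-216000) are minikube's readiness poll; the pod_ready.go lines show it re-fetching the pod roughly every 500 ms (per the timestamps) until the Ready condition turns True. Below is a minimal client-go sketch of such a loop. It is an illustration only, not minikube's actual pod_ready.go; the interval, timeout, and kubeconfig path are assumptions, and the pod/namespace names are copied from the log.

    // readiness_poll_sketch.go: hedged sketch of a pod-readiness poll with client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: kubeconfig at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500 ms (matching the spacing of the GETs in the log) with an assumed 4-minute timeout.
        err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-787d4945fb-48v9r", metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling on transient errors
            }
            return podIsReady(pod), nil
        })
        fmt.Println("pod ready:", err == nil)
    }
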
	I0222 20:45:50.641506    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:50.641526    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.641538    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.641548    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.645827    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:50.645841    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.645847    8582 round_trippers.go:580]     Audit-Id: b6bf9ea6-19ea-4be1-b27f-10a22785fc7e
	I0222 20:45:50.645852    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.645857    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.645863    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.645872    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.645877    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.645969    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:50.646255    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:50.646261    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.646268    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.646274    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.648172    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:45:50.648184    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.648191    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.648196    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.648201    8582 round_trippers.go:580]     Audit-Id: 7968df05-1bf0-4cb4-92c7-54c41245506b
	I0222 20:45:50.648206    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.648212    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.648217    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.648278    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:51.139979    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:51.139992    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.140000    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.140005    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.142879    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:51.142890    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.142896    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.142901    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.142906    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.142911    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.142916    8582 round_trippers.go:580]     Audit-Id: 21cdef4b-1a9a-44c3-b3ff-976fecfa1633
	I0222 20:45:51.142921    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.142988    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:51.143256    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:51.143262    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.143267    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.143273    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.145289    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:51.145300    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.145306    8582 round_trippers.go:580]     Audit-Id: 1b62ad60-f08e-4d4d-ba39-02538b179cff
	I0222 20:45:51.145312    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.145318    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.145323    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.145328    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.145332    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.145386    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:51.640521    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:51.640542    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.640554    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.640569    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.644804    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:51.644819    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.644825    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.644834    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.644839    8582 round_trippers.go:580]     Audit-Id: cf595fd9-9d46-4a6b-947d-288fc8a55947
	I0222 20:45:51.644843    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.644848    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.644852    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.644913    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:51.645206    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:51.645212    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.645218    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.645224    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.647693    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:51.647703    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.647709    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.647713    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.647718    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.647724    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.647729    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.647734    8582 round_trippers.go:580]     Audit-Id: f104b9b8-5073-4512-b09b-4cf613e59bbc
	I0222 20:45:51.647792    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:52.140140    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:52.140153    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.140159    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.140164    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.143150    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.143164    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.143173    8582 round_trippers.go:580]     Audit-Id: 1072dd5e-a5bc-4249-9931-3948bd97a535
	I0222 20:45:52.143179    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.143184    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.143191    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.143198    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.143205    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.143442    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:52.143721    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:52.143727    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.143733    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.143738    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.146009    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.146020    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.146028    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.146035    8582 round_trippers.go:580]     Audit-Id: 79e6e7c7-8e43-406f-a996-d5eb26e374a0
	I0222 20:45:52.146041    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.146047    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.146052    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.146057    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.146126    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:52.641154    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:52.641166    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.641173    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.641178    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.643969    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.643979    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.643985    8582 round_trippers.go:580]     Audit-Id: 487e4bd3-7c15-4850-8ca9-410deef08a9f
	I0222 20:45:52.643990    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.643995    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.644000    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.644005    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.644010    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.644231    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:52.644522    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:52.644528    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.644534    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.644540    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.646899    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.646909    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.646914    8582 round_trippers.go:580]     Audit-Id: 3f463525-4185-48d7-b980-519a6a2c4e42
	I0222 20:45:52.646919    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.646924    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.646929    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.646934    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.646940    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.647005    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:52.647291    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:53.140115    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:53.140128    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.140135    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.140143    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.143063    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.143076    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.143082    8582 round_trippers.go:580]     Audit-Id: 6bf7e802-cd5d-4db2-ab6b-c20d33ff04e0
	I0222 20:45:53.143094    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.143099    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.143104    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.143109    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.143114    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.143183    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:53.143518    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:53.143526    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.143532    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.143537    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.146495    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.146506    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.146512    8582 round_trippers.go:580]     Audit-Id: ec00560f-c828-4cb4-bb4e-864dba4ce460
	I0222 20:45:53.146517    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.146530    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.146536    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.146541    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.146546    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.146670    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:53.639965    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:53.639984    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.639993    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.640001    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.642894    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.642909    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.642923    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.642932    8582 round_trippers.go:580]     Audit-Id: cf9ae1c0-10f3-4598-a024-b7a098024171
	I0222 20:45:53.642939    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.642947    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.642958    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.642982    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.643718    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:53.644206    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:53.644213    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.644220    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.644226    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.646715    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.646729    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.646736    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.646743    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.646751    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.646763    8582 round_trippers.go:580]     Audit-Id: 09bc7ec7-e54c-4c2e-8780-9cba4be468ac
	I0222 20:45:53.646771    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.646777    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.646976    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:54.139974    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:54.140011    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.140070    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.140079    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.143089    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:54.143102    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.143108    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.143113    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.143118    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.143123    8582 round_trippers.go:580]     Audit-Id: cb56166b-e6f5-4bb0-85e5-839ba3dfc7ca
	I0222 20:45:54.143128    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.143135    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.143198    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:54.143511    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:54.143518    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.143524    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.143530    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.146122    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:54.146138    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.146146    8582 round_trippers.go:580]     Audit-Id: ea24488e-26e8-4eb1-9dd6-eb19e41ef545
	I0222 20:45:54.146153    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.146175    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.146188    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.146199    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.146205    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.146319    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:54.639923    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:54.639939    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.639945    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.639952    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.643433    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:54.643445    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.643451    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.643461    8582 round_trippers.go:580]     Audit-Id: 9fd5220f-5b7f-476a-91a8-69ca81236f30
	I0222 20:45:54.643467    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.643472    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.643477    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.643482    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.643552    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:54.643886    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:54.643894    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.643902    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.643910    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.646104    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:54.646117    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.646123    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.646139    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.646147    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.646155    8582 round_trippers.go:580]     Audit-Id: c9868175-a95b-41b4-8e80-a4d1e58a7f18
	I0222 20:45:54.646163    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.646203    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.646411    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:55.140021    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:55.140035    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.140043    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.140048    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.142834    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.142850    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.142857    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.142863    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.142870    8582 round_trippers.go:580]     Audit-Id: 1211f572-a9be-4a51-a55c-a4ba504587a1
	I0222 20:45:55.142877    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.142889    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.142900    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.143034    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:55.143365    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:55.143372    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.143378    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.143386    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.146228    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.146239    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.146246    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.146251    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.146255    8582 round_trippers.go:580]     Audit-Id: 36b34839-3ee1-47cc-9b9c-ab7a03fa563a
	I0222 20:45:55.146261    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.146265    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.146271    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.146362    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:55.146560    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:55.640075    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:55.640088    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.640098    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.640104    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.643028    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.643055    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.643078    8582 round_trippers.go:580]     Audit-Id: 25244bc4-a078-4bab-8d55-e11eb15cfaf0
	I0222 20:45:55.643092    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.643104    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.643119    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.643130    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.643138    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.643220    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:55.643533    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:55.643540    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.643546    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.643551    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.645884    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.645893    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.645899    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.645904    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.645909    8582 round_trippers.go:580]     Audit-Id: 36c0a7de-850c-4597-9f67-8c0ba2527d42
	I0222 20:45:55.645918    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.645923    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.645928    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.645986    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:56.139865    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:56.139881    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.139890    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.139895    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.143181    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:56.143195    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.143201    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.143206    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.143211    8582 round_trippers.go:580]     Audit-Id: 435d589e-7201-400d-b2f0-bdd7f57246b7
	I0222 20:45:56.143235    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.143241    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.143245    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.143312    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:56.143603    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:56.143610    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.143616    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.143621    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.146004    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:56.146023    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.146032    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.146042    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.146050    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.146059    8582 round_trippers.go:580]     Audit-Id: 38046551-3dc8-4ac7-8c3a-9ae71e543ca0
	I0222 20:45:56.146067    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.146076    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.146201    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:56.639923    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:56.639938    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.639946    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.639954    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.642916    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:56.642930    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.642935    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.642940    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.642945    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.642950    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.642957    8582 round_trippers.go:580]     Audit-Id: c8fa652c-eb98-4324-99ab-9af2ea295340
	I0222 20:45:56.642965    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.643065    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:56.643382    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:56.643389    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.643396    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.643401    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.645816    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:56.645827    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.645834    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.645839    8582 round_trippers.go:580]     Audit-Id: f0c8b616-c1c7-430e-be79-c5270a63325b
	I0222 20:45:56.645844    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.645849    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.645854    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.645859    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.645940    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:57.139921    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:57.139940    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.139949    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.139958    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.142895    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:57.142910    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.142918    8582 round_trippers.go:580]     Audit-Id: 3cf66bfd-9dc3-4834-9056-ec38cc143b98
	I0222 20:45:57.142926    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.142933    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.142940    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.142948    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.142953    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.143028    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:57.143406    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:57.143414    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.143421    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.143432    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.146610    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:57.146621    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.146627    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.146633    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.146638    8582 round_trippers.go:580]     Audit-Id: 2174f350-326d-4e0a-aa37-bcfb028be85a
	I0222 20:45:57.146643    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.146648    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.146653    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.146708    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:57.146890    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:57.640088    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:57.640102    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.640109    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.640114    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.642850    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:57.642865    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.642871    8582 round_trippers.go:580]     Audit-Id: c1c0801a-d9e1-40d2-a43c-0d936142c7c7
	I0222 20:45:57.642876    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.642881    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.642886    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.642891    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.642896    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.642955    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:57.643260    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:57.643268    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.643273    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.643279    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.645570    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:57.645583    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.645589    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.645597    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.645610    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.645619    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.645628    8582 round_trippers.go:580]     Audit-Id: 772991df-41aa-427f-9449-91c195b727d9
	I0222 20:45:57.645636    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.646116    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:58.140078    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:58.140092    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.140099    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.140104    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.143093    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:58.143116    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.143128    8582 round_trippers.go:580]     Audit-Id: b06beddb-37f6-4c05-bd55-19c244bdec48
	I0222 20:45:58.143137    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.143143    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.143147    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.143155    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.143163    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.143242    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:58.143539    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:58.143546    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.143552    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.143559    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.146147    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:58.146157    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.146163    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.146168    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.146188    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.146192    8582 round_trippers.go:580]     Audit-Id: 799d5e65-61e5-4cfe-87bf-ec1b954b7be1
	I0222 20:45:58.146220    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.146228    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.146329    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:58.640268    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:58.640285    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.640293    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.640299    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.643382    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:58.643395    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.643401    8582 round_trippers.go:580]     Audit-Id: 85116caf-f691-48e2-8d30-a62e10ccf2d2
	I0222 20:45:58.643405    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.643410    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.643416    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.643423    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.643433    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.643512    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:58.643871    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:58.643878    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.643885    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.643890    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.646465    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:58.646481    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.646490    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.646498    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.646506    8582 round_trippers.go:580]     Audit-Id: 1c70a520-2c51-42cb-b0b6-33e4480275a1
	I0222 20:45:58.646512    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.646517    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.646524    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.646811    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:59.139799    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:59.139815    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.139822    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.139828    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.142962    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:59.142981    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.142991    8582 round_trippers.go:580]     Audit-Id: 2ced3e19-9b46-4edd-aa75-768063b86b69
	I0222 20:45:59.142999    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.143007    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.143013    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.143025    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.143034    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.143113    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:59.143469    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:59.143477    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.143486    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.143494    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.146903    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:59.146920    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.146931    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.146941    8582 round_trippers.go:580]     Audit-Id: 542d2864-26dd-461d-bf54-321063cd896e
	I0222 20:45:59.146954    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.146968    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.146982    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.146993    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.147074    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:59.147282    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:59.639875    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:59.639888    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.639895    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.639900    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.642671    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:59.642684    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.642690    8582 round_trippers.go:580]     Audit-Id: 2cb6fd90-aba9-42ac-ba1c-d221c9c8e259
	I0222 20:45:59.642697    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.642704    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.642711    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.642718    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.642725    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.642952    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:59.643346    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:59.643354    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.643367    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.643375    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.646445    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:59.646459    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.646467    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.646474    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.646482    8582 round_trippers.go:580]     Audit-Id: c57b85f4-368f-469e-b6fd-f0e4efd0c942
	I0222 20:45:59.646489    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.646496    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.646503    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.646592    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:00.140030    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:00.140046    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.140053    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.140058    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.143022    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:00.143033    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.143039    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.143045    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.143058    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.143071    8582 round_trippers.go:580]     Audit-Id: 08672f7a-b6eb-4bf4-a938-b407b84353c8
	I0222 20:46:00.143077    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.143082    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.143169    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:46:00.143467    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:00.143474    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.143480    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.143485    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.146328    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:00.146341    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.146346    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.146352    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.146357    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.146361    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.146369    8582 round_trippers.go:580]     Audit-Id: 2c483815-1256-4385-9c3e-e63648997f6e
	I0222 20:46:00.146374    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.146445    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:00.639989    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:00.640002    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.640008    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.640014    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.643297    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:00.643324    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.643350    8582 round_trippers.go:580]     Audit-Id: 1641eec4-c118-418a-b4c4-159e03fff41e
	I0222 20:46:00.643355    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.643360    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.643380    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.643384    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.643404    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.643470    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:46:00.643779    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:00.643785    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.643790    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.643804    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.646032    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:00.646041    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.646047    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.646051    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.646057    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.646062    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.646067    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.646073    8582 round_trippers.go:580]     Audit-Id: 89744106-5d95-4394-bd05-23d88939f863
	I0222 20:46:00.646130    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.141682    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:01.141707    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.141760    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.141776    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.146335    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:01.146354    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.146360    8582 round_trippers.go:580]     Audit-Id: c7f5154d-05d0-455f-8415-8152af6bbeea
	I0222 20:46:01.146366    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.146371    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.146376    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.146381    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.146386    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.146446    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:46:01.146739    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.146746    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.146754    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.146764    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.149170    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.149179    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.149185    8582 round_trippers.go:580]     Audit-Id: 6a0e6f92-b743-4628-92ec-34cda14d2195
	I0222 20:46:01.149190    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.149196    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.149202    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.149207    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.149211    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.149261    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.149433    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:46:01.639921    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:01.639936    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.639945    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.639952    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.643195    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:01.643207    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.643212    8582 round_trippers.go:580]     Audit-Id: be74f7da-9a56-43f7-abd5-8953c4c3e7e4
	I0222 20:46:01.643217    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.643221    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.643226    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.643231    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.643236    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.643297    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0222 20:46:01.643571    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.643577    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.643583    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.643588    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.645797    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.645807    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.645812    8582 round_trippers.go:580]     Audit-Id: ccee74ee-d1d3-4992-910e-5344d050eda6
	I0222 20:46:01.645818    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.645823    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.645828    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.645834    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.645838    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.645891    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.646072    8582 pod_ready.go:92] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.646082    8582 pod_ready.go:81] duration metric: took 15.515560245s waiting for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.646101    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.646135    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-j4pt7
	I0222 20:46:01.646141    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.646149    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.646155    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.647960    8582 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0222 20:46:01.647969    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.647975    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.647980    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.647986    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.647991    8582 round_trippers.go:580]     Content-Length: 216
	I0222 20:46:01.647997    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.648002    8582 round_trippers.go:580]     Audit-Id: 9a9111b3-2786-486c-8ea9-1285ebd6f435
	I0222 20:46:01.648007    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.648018    8582 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-j4pt7\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-j4pt7","kind":"pods"},"code":404}
	I0222 20:46:01.648140    8582 pod_ready.go:97] error getting pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-j4pt7" not found
	I0222 20:46:01.648147    8582 pod_ready.go:81] duration metric: took 2.039438ms waiting for pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace to be "Ready" ...
	E0222 20:46:01.648153    8582 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-j4pt7" not found
	I0222 20:46:01.648158    8582 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.648191    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/etcd-multinode-216000
	I0222 20:46:01.648195    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.648202    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.648208    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.650179    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:01.650189    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.650195    8582 round_trippers.go:580]     Audit-Id: 957f7af0-2e7f-4eb9-93b7-2603fff7327b
	I0222 20:46:01.650200    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.650205    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.650210    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.650215    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.650220    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.650274    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-216000","namespace":"kube-system","uid":"c2b06896-f123-48bd-8603-0d7493488f5c","resourceVersion":"389","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.mirror":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257428627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0222 20:46:01.650488    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.650494    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.650500    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.650505    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.652601    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.652611    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.652616    8582 round_trippers.go:580]     Audit-Id: 3aee6e68-170a-4a56-957c-a1ad67425c49
	I0222 20:46:01.652624    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.652629    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.652634    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.652640    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.652645    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.652697    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.652858    8582 pod_ready.go:92] pod "etcd-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.652863    8582 pod_ready.go:81] duration metric: took 4.701382ms waiting for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.652871    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.652895    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-216000
	I0222 20:46:01.652899    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.652905    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.652910    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.655186    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.655194    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.655200    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.655205    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.655210    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.655217    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.655223    8582 round_trippers.go:580]     Audit-Id: e0883865-49a8-4840-ac42-9b94db300e58
	I0222 20:46:01.655227    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.655288    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-216000","namespace":"kube-system","uid":"a28861be-afed-4463-a3c0-e438a5122dc8","resourceVersion":"276","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.mirror":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.seen":"2023-02-23T04:45:32.257429393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0222 20:46:01.655541    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.655547    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.655552    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.655559    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.657527    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:01.657536    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.657541    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.657546    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.657551    8582 round_trippers.go:580]     Audit-Id: 82a6e392-e7b5-4d15-bb68-f62e42301358
	I0222 20:46:01.657556    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.657561    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.657566    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.657611    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.657784    8582 pod_ready.go:92] pod "kube-apiserver-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.657792    8582 pod_ready.go:81] duration metric: took 4.913891ms waiting for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.657797    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.657823    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-216000
	I0222 20:46:01.657828    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.657833    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.657839    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.659962    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.659971    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.659976    8582 round_trippers.go:580]     Audit-Id: 20530d6e-999d-4b83-9fa1-08eeb6484a0e
	I0222 20:46:01.659981    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.659987    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.659991    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.659997    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.660002    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.660083    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-216000","namespace":"kube-system","uid":"a851a311-37aa-46d5-9152-a95acbbc88ec","resourceVersion":"272","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.mirror":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257424246Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0222 20:46:01.660320    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.660325    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.660331    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.660338    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.662376    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.662385    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.662391    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.662396    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.662402    8582 round_trippers.go:580]     Audit-Id: 6595d8d4-0ff8-4b38-ad13-d168e2dcb100
	I0222 20:46:01.662407    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.662412    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.662417    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.662459    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.662618    8582 pod_ready.go:92] pod "kube-controller-manager-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.662624    8582 pod_ready.go:81] duration metric: took 4.821724ms waiting for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.662629    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.662659    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-fgxrw
	I0222 20:46:01.662664    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.662669    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.662675    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.664591    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:01.664601    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.664607    8582 round_trippers.go:580]     Audit-Id: 84e8c5b6-a703-48fe-b328-61e5e74b1a63
	I0222 20:46:01.664612    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.664618    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.664623    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.664627    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.664632    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.664687    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fgxrw","generateName":"kube-proxy-","namespace":"kube-system","uid":"7402cf62-2944-469b-9c38-0447377d4579","resourceVersion":"393","creationTimestamp":"2023-02-23T04:45:44Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0222 20:46:01.840015    8582 request.go:622] Waited for 175.056702ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.840043    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.840047    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.840053    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.840060    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.842817    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.842827    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.842833    8582 round_trippers.go:580]     Audit-Id: 5c6904ba-5404-4f47-8f45-fa0ec8a99bee
	I0222 20:46:01.842838    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.842843    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.842848    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.842853    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.842858    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.842917    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.843098    8582 pod_ready.go:92] pod "kube-proxy-fgxrw" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.843104    8582 pod_ready.go:81] duration metric: took 180.472895ms waiting for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.843110    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:02.041988    8582 request.go:622] Waited for 198.823853ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-216000
	I0222 20:46:02.042146    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-216000
	I0222 20:46:02.042158    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.042169    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.042182    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.047502    8582 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0222 20:46:02.047521    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.047527    8582 round_trippers.go:580]     Audit-Id: 0843bfe4-22ad-4594-a36c-ba20edc80e7c
	I0222 20:46:02.047532    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.047536    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.047541    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.047546    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.047551    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.047614    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-216000","namespace":"kube-system","uid":"a77cec17-0ffa-4b1b-91b0-aa6367fc7848","resourceVersion":"270","creationTimestamp":"2023-02-23T04:45:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.mirror":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.seen":"2023-02-23T04:45:22.142158982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0222 20:46:02.241177    8582 request.go:622] Waited for 193.17144ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:02.241228    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:02.241235    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.241247    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.241257    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.245229    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:02.245244    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.245252    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.245259    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.245266    8582 round_trippers.go:580]     Audit-Id: ad9e33d1-828e-440f-b4f6-e72f827fe347
	I0222 20:46:02.245273    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.245279    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.245286    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.245385    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:02.245597    8582 pod_ready.go:92] pod "kube-scheduler-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:02.245603    8582 pod_ready.go:81] duration metric: took 402.491987ms waiting for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:02.245610    8582 pod_ready.go:38] duration metric: took 16.125498934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:46:02.245625    8582 api_server.go:51] waiting for apiserver process to appear ...
	I0222 20:46:02.245687    8582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 20:46:02.255118    8582 command_runner.go:130] > 1920
	I0222 20:46:02.255764    8582 api_server.go:71] duration metric: took 16.816800107s to wait for apiserver process to appear ...
	I0222 20:46:02.255774    8582 api_server.go:87] waiting for apiserver healthz status ...
	I0222 20:46:02.255785    8582 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51085/healthz ...
	I0222 20:46:02.261002    8582 api_server.go:278] https://127.0.0.1:51085/healthz returned 200:
	ok
	I0222 20:46:02.261035    8582 round_trippers.go:463] GET https://127.0.0.1:51085/version
	I0222 20:46:02.261039    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.261047    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.261053    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.262305    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:02.262314    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.262319    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.262325    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.262333    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.262338    8582 round_trippers.go:580]     Content-Length: 263
	I0222 20:46:02.262343    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.262347    8582 round_trippers.go:580]     Audit-Id: 940d1402-f44f-4fea-89fc-74b1769b4bd3
	I0222 20:46:02.262353    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.262362    8582 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0222 20:46:02.262410    8582 api_server.go:140] control plane version: v1.26.1
	I0222 20:46:02.262416    8582 api_server.go:130] duration metric: took 6.639194ms to wait for apiserver health ...
	I0222 20:46:02.262420    8582 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 20:46:02.440157    8582 request.go:622] Waited for 177.694516ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.440199    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.440209    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.440277    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.440286    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.444553    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:02.444566    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.444576    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.444581    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.444586    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.444591    8582 round_trippers.go:580]     Audit-Id: 9502ef66-3432-4eb2-9ac0-475cbd92a774
	I0222 20:46:02.444597    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.444601    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.445075    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0222 20:46:02.446384    8582 system_pods.go:59] 8 kube-system pods found
	I0222 20:46:02.446394    8582 system_pods.go:61] "coredns-787d4945fb-48v9r" [e6f820e8-bc10-4500-8a19-17a16c982d46] Running
	I0222 20:46:02.446398    8582 system_pods.go:61] "etcd-multinode-216000" [c2b06896-f123-48bd-8603-0d7493488f5c] Running
	I0222 20:46:02.446402    8582 system_pods.go:61] "kindnet-m7gzm" [16c4431b-9696-442c-bcd2-626629a1cb64] Running
	I0222 20:46:02.446406    8582 system_pods.go:61] "kube-apiserver-multinode-216000" [a28861be-afed-4463-a3c0-e438a5122dc8] Running
	I0222 20:46:02.446412    8582 system_pods.go:61] "kube-controller-manager-multinode-216000" [a851a311-37aa-46d5-9152-a95acbbc88ec] Running
	I0222 20:46:02.446416    8582 system_pods.go:61] "kube-proxy-fgxrw" [7402cf62-2944-469b-9c38-0447377d4579] Running
	I0222 20:46:02.446421    8582 system_pods.go:61] "kube-scheduler-multinode-216000" [a77cec17-0ffa-4b1b-91b0-aa6367fc7848] Running
	I0222 20:46:02.446424    8582 system_pods.go:61] "storage-provisioner" [9540d868-f1fc-476f-8ebd-f4f5ac9bebac] Running
	I0222 20:46:02.446428    8582 system_pods.go:74] duration metric: took 184.006753ms to wait for pod list to return data ...
	I0222 20:46:02.446437    8582 default_sa.go:34] waiting for default service account to be created ...
	I0222 20:46:02.640074    8582 request.go:622] Waited for 193.587029ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/default/serviceaccounts
	I0222 20:46:02.640169    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/default/serviceaccounts
	I0222 20:46:02.640180    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.640192    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.640204    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.643643    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:02.643653    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.643658    8582 round_trippers.go:580]     Audit-Id: ebe51535-faea-48b2-8c93-8c35a6c16e5f
	I0222 20:46:02.643663    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.643668    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.643673    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.643679    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.643689    8582 round_trippers.go:580]     Content-Length: 261
	I0222 20:46:02.643694    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.643707    8582 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"53d67d08-9409-45ee-aadf-89048d75e48e","resourceVersion":"304","creationTimestamp":"2023-02-23T04:45:44Z"}}]}
	I0222 20:46:02.643813    8582 default_sa.go:45] found service account: "default"
	I0222 20:46:02.643819    8582 default_sa.go:55] duration metric: took 197.380215ms for default service account to be created ...
	I0222 20:46:02.643828    8582 system_pods.go:116] waiting for k8s-apps to be running ...
	I0222 20:46:02.841986    8582 request.go:622] Waited for 198.120655ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.842135    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.842147    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.842160    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.842171    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.847486    8582 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0222 20:46:02.847502    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.847508    8582 round_trippers.go:580]     Audit-Id: b96b3b16-8ba5-4a64-9795-9d9b3d0cd8f8
	I0222 20:46:02.847513    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.847520    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.847527    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.847538    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.847544    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.848654    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0222 20:46:02.849926    8582 system_pods.go:86] 8 kube-system pods found
	I0222 20:46:02.849935    8582 system_pods.go:89] "coredns-787d4945fb-48v9r" [e6f820e8-bc10-4500-8a19-17a16c982d46] Running
	I0222 20:46:02.849941    8582 system_pods.go:89] "etcd-multinode-216000" [c2b06896-f123-48bd-8603-0d7493488f5c] Running
	I0222 20:46:02.849945    8582 system_pods.go:89] "kindnet-m7gzm" [16c4431b-9696-442c-bcd2-626629a1cb64] Running
	I0222 20:46:02.849949    8582 system_pods.go:89] "kube-apiserver-multinode-216000" [a28861be-afed-4463-a3c0-e438a5122dc8] Running
	I0222 20:46:02.849953    8582 system_pods.go:89] "kube-controller-manager-multinode-216000" [a851a311-37aa-46d5-9152-a95acbbc88ec] Running
	I0222 20:46:02.849957    8582 system_pods.go:89] "kube-proxy-fgxrw" [7402cf62-2944-469b-9c38-0447377d4579] Running
	I0222 20:46:02.849962    8582 system_pods.go:89] "kube-scheduler-multinode-216000" [a77cec17-0ffa-4b1b-91b0-aa6367fc7848] Running
	I0222 20:46:02.849966    8582 system_pods.go:89] "storage-provisioner" [9540d868-f1fc-476f-8ebd-f4f5ac9bebac] Running
	I0222 20:46:02.849970    8582 system_pods.go:126] duration metric: took 206.141147ms to wait for k8s-apps to be running ...
	I0222 20:46:02.849979    8582 system_svc.go:44] waiting for kubelet service to be running ....
	I0222 20:46:02.850037    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:46:02.859842    8582 system_svc.go:56] duration metric: took 9.86221ms WaitForService to wait for kubelet.
	I0222 20:46:02.859855    8582 kubeadm.go:578] duration metric: took 17.420896974s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0222 20:46:02.859869    8582 node_conditions.go:102] verifying NodePressure condition ...
	I0222 20:46:03.039987    8582 request.go:622] Waited for 180.026741ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:03.040032    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:03.040036    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:03.040043    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:03.040049    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:03.042481    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:03.042492    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:03.042497    8582 round_trippers.go:580]     Audit-Id: f96fdf85-a7cd-4e39-9dd6-0fb7d3be5def
	I0222 20:46:03.042502    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:03.042508    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:03.042517    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:03.042522    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:03.042527    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:03 GMT
	I0222 20:46:03.042585    8582 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5005 chars]
	I0222 20:46:03.042807    8582 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 20:46:03.042819    8582 node_conditions.go:123] node cpu capacity is 6
	I0222 20:46:03.042829    8582 node_conditions.go:105] duration metric: took 182.957935ms to run NodePressure ...
	I0222 20:46:03.042837    8582 start.go:228] waiting for startup goroutines ...
	I0222 20:46:03.042843    8582 start.go:233] waiting for cluster config update ...
	I0222 20:46:03.042870    8582 start.go:242] writing updated cluster config ...
	I0222 20:46:03.063512    8582 out.go:177] 
	I0222 20:46:03.101815    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:46:03.101927    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:46:03.124498    8582 out.go:177] * Starting worker node multinode-216000-m02 in cluster multinode-216000
	I0222 20:46:03.167345    8582 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 20:46:03.188383    8582 out.go:177] * Pulling base image ...
	I0222 20:46:03.231238    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:46:03.231300    8582 cache.go:57] Caching tarball of preloaded images
	I0222 20:46:03.231301    8582 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 20:46:03.231472    8582 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 20:46:03.231489    8582 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 20:46:03.231597    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:46:03.288231    8582 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 20:46:03.288254    8582 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 20:46:03.288286    8582 cache.go:193] Successfully downloaded all kic artifacts
	I0222 20:46:03.288317    8582 start.go:364] acquiring machines lock for multinode-216000-m02: {Name:mk771672be864b661a9d3157699d8a2299fad1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 20:46:03.288470    8582 start.go:368] acquired machines lock for "multinode-216000-m02" in 142.417µs
	I0222 20:46:03.288496    8582 start.go:93] Provisioning new machine with config: &{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0222 20:46:03.288583    8582 start.go:125] createHost starting for "m02" (driver="docker")
	I0222 20:46:03.310228    8582 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0222 20:46:03.310404    8582 start.go:159] libmachine.API.Create for "multinode-216000" (driver="docker")
	I0222 20:46:03.310437    8582 client.go:168] LocalClient.Create starting
	I0222 20:46:03.310589    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
	I0222 20:46:03.310665    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:46:03.310690    8582 main.go:141] libmachine: Parsing certificate...
	I0222 20:46:03.310783    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
	I0222 20:46:03.310833    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:46:03.310852    8582 main.go:141] libmachine: Parsing certificate...
	I0222 20:46:03.331516    8582 cli_runner.go:164] Run: docker network inspect multinode-216000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0222 20:46:03.388081    8582 network_create.go:76] Found existing network {name:multinode-216000 subnet:0xc0004de2d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0222 20:46:03.388129    8582 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-216000-m02" container
	I0222 20:46:03.388257    8582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0222 20:46:03.447257    8582 cli_runner.go:164] Run: docker volume create multinode-216000-m02 --label name.minikube.sigs.k8s.io=multinode-216000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0222 20:46:03.503476    8582 oci.go:103] Successfully created a docker volume multinode-216000-m02
	I0222 20:46:03.503608    8582 cli_runner.go:164] Run: docker run --rm --name multinode-216000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000-m02 --entrypoint /usr/bin/test -v multinode-216000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0222 20:46:03.944305    8582 oci.go:107] Successfully prepared a docker volume multinode-216000-m02
	I0222 20:46:03.944350    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:46:03.944361    8582 kic.go:190] Starting extracting preloaded images to volume ...
	I0222 20:46:03.944492    8582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0222 20:46:10.327136    8582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.382633829s)
	I0222 20:46:10.327159    8582 kic.go:199] duration metric: took 6.382869 seconds to extract preloaded images to volume
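The step above untars the cached image tarball into the node's /var volume before the node container exists, so the Docker daemon inside m02 starts with the v1.26.1 images already in place. If the preload needs to be checked by hand, a throwaway container over the same volume works (a sketch reusing the kicbase image already pulled above):

	docker run --rm --entrypoint /bin/ls \
	  -v multinode-216000-m02:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc \
	  /var/lib/docker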
	I0222 20:46:10.327312    8582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0222 20:46:10.477375    8582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-216000-m02 --name multinode-216000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-216000-m02 --network multinode-216000 --ip 192.168.58.3 --volume multinode-216000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0222 20:46:10.841012    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Running}}
	I0222 20:46:10.906969    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:46:10.974576    8582 cli_runner.go:164] Run: docker exec multinode-216000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0222 20:46:11.082421    8582 oci.go:144] the created container "multinode-216000-m02" has a running status.
	I0222 20:46:11.082546    8582 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa...
	I0222 20:46:11.166692    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0222 20:46:11.166759    8582 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0222 20:46:11.276581    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:46:11.339407    8582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0222 20:46:11.339428    8582 kic_runner.go:114] Args: [docker exec --privileged multinode-216000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0222 20:46:11.440252    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:46:11.501592    8582 machine.go:88] provisioning docker machine ...
	I0222 20:46:11.501622    8582 ubuntu.go:169] provisioning hostname "multinode-216000-m02"
	I0222 20:46:11.501720    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:11.584080    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:11.584484    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:11.584495    8582 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-216000-m02 && echo "multinode-216000-m02" | sudo tee /etc/hostname
	I0222 20:46:11.728290    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-216000-m02
	
	I0222 20:46:11.728380    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:11.788579    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:11.788947    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:11.788961    8582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-216000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-216000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-216000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 20:46:11.924694    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 20:46:11.924717    8582 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 20:46:11.924725    8582 ubuntu.go:177] setting up certificates
	I0222 20:46:11.924735    8582 provision.go:83] configureAuth start
	I0222 20:46:11.924826    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000-m02
	I0222 20:46:11.984151    8582 provision.go:138] copyHostCerts
	I0222 20:46:11.984200    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:46:11.984260    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 20:46:11.984266    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:46:11.984420    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 20:46:11.984601    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:46:11.984658    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 20:46:11.984664    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:46:11.984739    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 20:46:11.984880    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:46:11.984913    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 20:46:11.984918    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:46:11.984974    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 20:46:11.985100    8582 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.multinode-216000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-216000-m02]
	I0222 20:46:12.504429    8582 provision.go:172] copyRemoteCerts
	I0222 20:46:12.504494    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 20:46:12.504546    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:12.565974    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
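The id_rsa generated above and the published 22/tcp port recorded here (51154 in this run) are enough to reach the new node manually if the automated provisioning stalls (a sketch; the host port changes between runs):

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa \
	  -p 51154 docker@127.0.0.1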
	I0222 20:46:12.662253    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0222 20:46:12.662333    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 20:46:12.680393    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0222 20:46:12.680466    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0222 20:46:12.698324    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0222 20:46:12.698403    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 20:46:12.716052    8582 provision.go:86] duration metric: configureAuth took 791.317078ms
	I0222 20:46:12.716065    8582 ubuntu.go:193] setting minikube options for container-runtime
	I0222 20:46:12.716211    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:46:12.716281    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:12.775756    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:12.776116    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:12.776127    8582 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 20:46:12.909345    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 20:46:12.909358    8582 ubuntu.go:71] root file system type: overlay
	I0222 20:46:12.909459    8582 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 20:46:12.909536    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:12.968891    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:12.969246    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:12.969301    8582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 20:46:13.113473    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 20:46:13.113576    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:13.174448    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:13.174806    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:13.174820    8582 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 20:46:13.805977    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:46:13.111929614 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0222 20:46:13.806000    8582 machine.go:91] provisioned docker machine in 2.304412699s
	I0222 20:46:13.806006    8582 client.go:171] LocalClient.Create took 10.495682981s
	I0222 20:46:13.806044    8582 start.go:167] duration metric: libmachine.API.Create for "multinode-216000" took 10.495761921s
	I0222 20:46:13.806050    8582 start.go:300] post-start starting for "multinode-216000-m02" (driver="docker")
	I0222 20:46:13.806055    8582 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 20:46:13.806143    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 20:46:13.806200    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:13.867642    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:13.961894    8582 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 20:46:13.966041    8582 command_runner.go:130] > NAME="Ubuntu"
	I0222 20:46:13.966055    8582 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0222 20:46:13.966060    8582 command_runner.go:130] > ID=ubuntu
	I0222 20:46:13.966064    8582 command_runner.go:130] > ID_LIKE=debian
	I0222 20:46:13.966071    8582 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0222 20:46:13.966076    8582 command_runner.go:130] > VERSION_ID="20.04"
	I0222 20:46:13.966081    8582 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0222 20:46:13.966085    8582 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0222 20:46:13.966090    8582 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0222 20:46:13.966102    8582 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0222 20:46:13.966106    8582 command_runner.go:130] > VERSION_CODENAME=focal
	I0222 20:46:13.966110    8582 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0222 20:46:13.966156    8582 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 20:46:13.966167    8582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 20:46:13.966174    8582 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 20:46:13.966179    8582 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 20:46:13.966185    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 20:46:13.966293    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 20:46:13.966446    8582 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 20:46:13.966452    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /etc/ssl/certs/31332.pem
	I0222 20:46:13.966632    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 20:46:13.974052    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:46:13.992577    8582 start.go:303] post-start completed in 186.521047ms
	I0222 20:46:13.993099    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000-m02
	I0222 20:46:14.052397    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:46:14.052834    8582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 20:46:14.052898    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:14.113123    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:14.206681    8582 command_runner.go:130] > 11%!
	(MISSING)I0222 20:46:14.207077    8582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 20:46:14.211542    8582 command_runner.go:130] > 50G
	I0222 20:46:14.211853    8582 start.go:128] duration metric: createHost completed in 10.923387707s
	I0222 20:46:14.211863    8582 start.go:83] releasing machines lock for "multinode-216000-m02", held for 10.923510699s
	I0222 20:46:14.211944    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000-m02
	I0222 20:46:14.297048    8582 out.go:177] * Found network options:
	I0222 20:46:14.317960    8582 out.go:177]   - NO_PROXY=192.168.58.2
	W0222 20:46:14.339012    8582 proxy.go:119] fail to check proxy env: Error ip not in block
	W0222 20:46:14.339065    8582 proxy.go:119] fail to check proxy env: Error ip not in block
	I0222 20:46:14.339259    8582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 20:46:14.339260    8582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 20:46:14.339366    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:14.339385    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:14.410169    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:14.410170    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:14.557593    8582 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0222 20:46:14.557617    8582 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0222 20:46:14.557626    8582 command_runner.go:130] > Device: 100006h/1048582d	Inode: 393237      Links: 1
	I0222 20:46:14.557633    8582 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:46:14.557641    8582 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:46:14.557650    8582 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:46:14.557656    8582 command_runner.go:130] > Change: 2023-02-23 04:22:34.614629251 +0000
	I0222 20:46:14.557661    8582 command_runner.go:130] >  Birth: -
	I0222 20:46:14.557686    8582 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0222 20:46:14.557788    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 20:46:14.580740    8582 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 20:46:14.580837    8582 ssh_runner.go:195] Run: which cri-dockerd
	I0222 20:46:14.584881    8582 command_runner.go:130] > /usr/bin/cri-dockerd
	I0222 20:46:14.585102    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 20:46:14.592651    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 20:46:14.606220    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0222 20:46:14.621509    8582 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0222 20:46:14.621534    8582 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0222 20:46:14.621545    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:46:14.621560    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:46:14.621657    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:46:14.634292    8582 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0222 20:46:14.634310    8582 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0222 20:46:14.635135    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 20:46:14.644120    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 20:46:14.653165    8582 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 20:46:14.653228    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 20:46:14.661973    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:46:14.671091    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 20:46:14.679844    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:46:14.688206    8582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 20:46:14.696454    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 20:46:14.705060    8582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 20:46:14.712051    8582 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0222 20:46:14.712704    8582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 20:46:14.719895    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:46:14.785817    8582 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 20:46:14.862698    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:46:14.862720    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:46:14.862812    8582 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 20:46:14.874159    8582 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0222 20:46:14.874170    8582 command_runner.go:130] > [Unit]
	I0222 20:46:14.874178    8582 command_runner.go:130] > Description=Docker Application Container Engine
	I0222 20:46:14.874183    8582 command_runner.go:130] > Documentation=https://docs.docker.com
	I0222 20:46:14.874187    8582 command_runner.go:130] > BindsTo=containerd.service
	I0222 20:46:14.874192    8582 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0222 20:46:14.874198    8582 command_runner.go:130] > Wants=network-online.target
	I0222 20:46:14.874203    8582 command_runner.go:130] > Requires=docker.socket
	I0222 20:46:14.874206    8582 command_runner.go:130] > StartLimitBurst=3
	I0222 20:46:14.874217    8582 command_runner.go:130] > StartLimitIntervalSec=60
	I0222 20:46:14.874221    8582 command_runner.go:130] > [Service]
	I0222 20:46:14.874225    8582 command_runner.go:130] > Type=notify
	I0222 20:46:14.874229    8582 command_runner.go:130] > Restart=on-failure
	I0222 20:46:14.874234    8582 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0222 20:46:14.874240    8582 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0222 20:46:14.874254    8582 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0222 20:46:14.874260    8582 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0222 20:46:14.874266    8582 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0222 20:46:14.874272    8582 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0222 20:46:14.874277    8582 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0222 20:46:14.874286    8582 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0222 20:46:14.874298    8582 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0222 20:46:14.874304    8582 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0222 20:46:14.874307    8582 command_runner.go:130] > ExecStart=
	I0222 20:46:14.874320    8582 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0222 20:46:14.874325    8582 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0222 20:46:14.874330    8582 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0222 20:46:14.874336    8582 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0222 20:46:14.874339    8582 command_runner.go:130] > LimitNOFILE=infinity
	I0222 20:46:14.874344    8582 command_runner.go:130] > LimitNPROC=infinity
	I0222 20:46:14.874347    8582 command_runner.go:130] > LimitCORE=infinity
	I0222 20:46:14.874353    8582 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0222 20:46:14.874358    8582 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0222 20:46:14.874364    8582 command_runner.go:130] > TasksMax=infinity
	I0222 20:46:14.874368    8582 command_runner.go:130] > TimeoutStartSec=0
	I0222 20:46:14.874373    8582 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0222 20:46:14.874378    8582 command_runner.go:130] > Delegate=yes
	I0222 20:46:14.874387    8582 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0222 20:46:14.874390    8582 command_runner.go:130] > KillMode=process
	I0222 20:46:14.874394    8582 command_runner.go:130] > [Install]
	I0222 20:46:14.874398    8582 command_runner.go:130] > WantedBy=multi-user.target
	I0222 20:46:14.874408    8582 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 20:46:14.874468    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 20:46:14.885992    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:46:14.900791    8582 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:46:14.900803    8582 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:46:14.901561    8582 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 20:46:14.980725    8582 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 20:46:15.076788    8582 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 20:46:15.076820    8582 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 20:46:15.090814    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:46:15.182035    8582 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 20:46:15.421323    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:46:15.497048    8582 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0222 20:46:15.497162    8582 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 20:46:15.572440    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:46:15.640210    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:46:15.720205    8582 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 20:46:15.731696    8582 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 20:46:15.731788    8582 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 20:46:15.735925    8582 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0222 20:46:15.735935    8582 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0222 20:46:15.735942    8582 command_runner.go:130] > Device: 10001bh/1048603d	Inode: 206         Links: 1
	I0222 20:46:15.735949    8582 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0222 20:46:15.735955    8582 command_runner.go:130] > Access: 2023-02-23 04:46:15.727929414 +0000
	I0222 20:46:15.735959    8582 command_runner.go:130] > Modify: 2023-02-23 04:46:15.727929414 +0000
	I0222 20:46:15.735964    8582 command_runner.go:130] > Change: 2023-02-23 04:46:15.728929413 +0000
	I0222 20:46:15.735967    8582 command_runner.go:130] >  Birth: -
	I0222 20:46:15.735989    8582 start.go:553] Will wait 60s for crictl version
	I0222 20:46:15.736032    8582 ssh_runner.go:195] Run: which crictl
	I0222 20:46:15.739516    8582 command_runner.go:130] > /usr/bin/crictl
	I0222 20:46:15.739571    8582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 20:46:15.832105    8582 command_runner.go:130] > Version:  0.1.0
	I0222 20:46:15.832122    8582 command_runner.go:130] > RuntimeName:  docker
	I0222 20:46:15.832128    8582 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0222 20:46:15.832135    8582 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0222 20:46:15.834551    8582 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 20:46:15.834633    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:46:15.859561    8582 command_runner.go:130] > 23.0.1
	I0222 20:46:15.861339    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:46:15.885579    8582 command_runner.go:130] > 23.0.1
	I0222 20:46:15.930657    8582 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 20:46:15.952562    8582 out.go:177]   - env NO_PROXY=192.168.58.2
	I0222 20:46:15.973605    8582 cli_runner.go:164] Run: docker exec -t multinode-216000-m02 dig +short host.docker.internal
	I0222 20:46:16.097800    8582 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 20:46:16.097914    8582 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 20:46:16.102508    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:46:16.112430    8582 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000 for IP: 192.168.58.3
	I0222 20:46:16.112457    8582 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:46:16.112701    8582 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 20:46:16.112776    8582 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 20:46:16.112787    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0222 20:46:16.112866    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0222 20:46:16.112891    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0222 20:46:16.112911    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0222 20:46:16.113041    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 20:46:16.113119    8582 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 20:46:16.113130    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 20:46:16.113181    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 20:46:16.113219    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 20:46:16.113289    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 20:46:16.113359    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:46:16.113393    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.113432    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem -> /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.113450    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.113837    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 20:46:16.131404    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 20:46:16.149340    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 20:46:16.166865    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 20:46:16.185817    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 20:46:16.203386    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 20:46:16.221611    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 20:46:16.239710    8582 ssh_runner.go:195] Run: openssl version
	I0222 20:46:16.245071    8582 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0222 20:46:16.245411    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 20:46:16.253747    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.257729    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.257824    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.257870    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.262981    8582 command_runner.go:130] > b5213941
	I0222 20:46:16.263329    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 20:46:16.272049    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 20:46:16.280618    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.284725    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.284755    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.284801    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.290228    8582 command_runner.go:130] > 51391683
	I0222 20:46:16.290573    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 20:46:16.298810    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 20:46:16.307038    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.311062    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.311138    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.311184    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.316314    8582 command_runner.go:130] > 3ec20f2e
	I0222 20:46:16.316748    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
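The hex names created here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links: tools that trust /etc/ssl/certs locate a CA by hashing its subject and opening <hash>.0, so minikube computes the hash and creates the symlink itself instead of re-running c_rehash over the directory. The equivalent by hand for one certificate (a sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"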
	I0222 20:46:16.325540    8582 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 20:46:16.349447    8582 command_runner.go:130] > cgroupfs
	I0222 20:46:16.351401    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:46:16.351423    8582 cni.go:136] 2 nodes found, recommending kindnet
	I0222 20:46:16.351435    8582 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 20:46:16.351452    8582 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-216000 NodeName:multinode-216000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 20:46:16.351546    8582 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-216000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 20:46:16.351593    8582 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-216000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
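The evictionHard values rendered above appear as "0%!"(MISSING) for the same logging reason noted earlier; the values minikube actually writes are plain "0%", i.e. disk-pressure eviction is effectively disabled on the node. Once the join below completes, the configuration the kubelet ends up using can be read back from the node (a sketch; flags assumed available in this minikube build):

	out/minikube-darwin-amd64 -p multinode-216000 ssh --node m02 "sudo cat /var/lib/kubelet/config.yaml"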
	I0222 20:46:16.351668    8582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 20:46:16.359184    8582 command_runner.go:130] > kubeadm
	I0222 20:46:16.359194    8582 command_runner.go:130] > kubectl
	I0222 20:46:16.359197    8582 command_runner.go:130] > kubelet
	I0222 20:46:16.359872    8582 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 20:46:16.359936    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0222 20:46:16.367960    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0222 20:46:16.381661    8582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 20:46:16.394894    8582 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0222 20:46:16.399028    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:46:16.409218    8582 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:46:16.409395    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:46:16.409412    8582 start.go:301] JoinCluster: &{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:46:16.409483    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0222 20:46:16.409535    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:46:16.469970    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:46:16.632987    8582 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token up5o8d.t4cvrsg5qdcp35bq --discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 20:46:16.633040    8582 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0222 20:46:16.633070    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up5o8d.t4cvrsg5qdcp35bq --discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-216000-m02"
	I0222 20:46:16.675254    8582 command_runner.go:130] > [preflight] Running pre-flight checks
	I0222 20:46:16.797713    8582 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0222 20:46:16.797746    8582 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0222 20:46:16.823782    8582 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:46:16.823797    8582 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:46:16.823802    8582 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0222 20:46:16.897914    8582 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0222 20:46:18.411749    8582 command_runner.go:130] > This node has joined the cluster:
	I0222 20:46:18.411767    8582 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0222 20:46:18.411775    8582 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0222 20:46:18.411782    8582 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0222 20:46:18.415575    8582 command_runner.go:130] ! W0223 04:46:16.674745    1233 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 20:46:18.415594    8582 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 20:46:18.415602    8582 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 20:46:18.415617    8582 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up5o8d.t4cvrsg5qdcp35bq --discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-216000-m02": (1.782555328s)
	I0222 20:46:18.415636    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0222 20:46:18.568267    8582 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0222 20:46:18.568350    8582 start.go:303] JoinCluster complete in 2.15893712s
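
The worker join above is the stock kubeadm token flow, condensed: mint a join command on the control plane, run it on the worker with minikube's extra flags, then enable and start the kubelet so the node survives a reboot (which also addresses the "[WARNING Service-Kubelet]" preflight note). A condensed sketch using the flags from this run; the token and CA hash are one-time values and will differ on any other run:

    # On the control-plane node:
    JOIN_CMD=$(sudo kubeadm token create --print-join-command --ttl=0)
    # On the worker node (m02); JOIN_CMD is left unquoted on purpose so it word-splits:
    sudo ${JOIN_CMD} --ignore-preflight-errors=all \
      --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-216000-m02
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet
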
	I0222 20:46:18.568361    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:46:18.568369    8582 cni.go:136] 2 nodes found, recommending kindnet
	I0222 20:46:18.568508    8582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0222 20:46:18.573989    8582 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0222 20:46:18.574003    8582 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0222 20:46:18.574014    8582 command_runner.go:130] > Device: a6h/166d	Inode: 267135      Links: 1
	I0222 20:46:18.574022    8582 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:46:18.574028    8582 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:46:18.574033    8582 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:46:18.574039    8582 command_runner.go:130] > Change: 2023-02-23 04:22:33.946629303 +0000
	I0222 20:46:18.574043    8582 command_runner.go:130] >  Birth: -
	I0222 20:46:18.574111    8582 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0222 20:46:18.574123    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0222 20:46:18.588101    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0222 20:46:18.782044    8582 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0222 20:46:18.784821    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0222 20:46:18.786626    8582 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0222 20:46:18.794913    8582 command_runner.go:130] > daemonset.apps/kindnet configured
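
With two nodes detected, minikube picks kindnet as the CNI and re-applies its manifest against the cluster; "unchanged"/"configured" above means the RBAC objects already existed from the first node and only the DaemonSet needed an update. A hedged check that the DaemonSet has scheduled a pod onto the new node:

    # Not from the test run; DESIRED/READY should reach 2 once the kindnet
    # pod on multinode-216000-m02 is running.
    kubectl --context multinode-216000 -n kube-system get daemonset kindnet
    kubectl --context multinode-216000 -n kube-system get pods -o wide | grep kindnet
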
	I0222 20:46:18.802272    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:46:18.802522    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:46:18.802838    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:46:18.802845    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.802852    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.802860    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.805222    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.805232    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.805239    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.805246    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.805252    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:46:18.805257    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.805262    8582 round_trippers.go:580]     Audit-Id: d6b2b061-0c91-4ebe-a4b7-8c37a6dbbb48
	I0222 20:46:18.805267    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.805273    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.805287    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"426","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0222 20:46:18.805341    8582 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-216000" context rescaled to 1 replicas
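
The GET against the coredns scale subresource above shows the deployment already at one replica, so the "rescaled to 1 replicas" step is effectively a no-op here; minikube keeps a single coredns replica even on multi-node clusters. Equivalent kubectl calls, for illustration only:

    kubectl --context multinode-216000 -n kube-system \
      get deployment coredns -o jsonpath='{.spec.replicas}'
    kubectl --context multinode-216000 -n kube-system \
      scale deployment coredns --replicas=1
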
	I0222 20:46:18.805356    8582 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0222 20:46:18.827722    8582 out.go:177] * Verifying Kubernetes components...
	I0222 20:46:18.868831    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:46:18.880767    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:46:18.941104    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:46:18.941317    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:46:18.941549    8582 node_ready.go:35] waiting up to 6m0s for node "multinode-216000-m02" to be "Ready" ...
	I0222 20:46:18.941594    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:18.941599    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.941605    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.941611    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.944225    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.944241    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.944249    8582 round_trippers.go:580]     Audit-Id: 0daf5ebe-a421-4659-82a4-5db257fa23df
	I0222 20:46:18.944256    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.944261    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.944267    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.944282    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.944287    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.944362    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:18.944558    8582 node_ready.go:49] node "multinode-216000-m02" has status "Ready":"True"
	I0222 20:46:18.944564    8582 node_ready.go:38] duration metric: took 3.007734ms waiting for node "multinode-216000-m02" to be "Ready" ...
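
node_ready.go polls the Node object for a Ready condition through the host-side tunnel (https://127.0.0.1:51085, the port Docker publishes for this profile's apiserver, per the container inspect above); here the node reports Ready on the first poll. A rough kubectl equivalent of the same 6-minute wait, going through the same kubeconfig context:

    # Sketch only; not executed by the test.
    kubectl --context multinode-216000 wait --for=condition=Ready \
      node/multinode-216000-m02 --timeout=6m
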
	I0222 20:46:18.944569    8582 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:46:18.944607    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:18.944611    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.944617    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.944622    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.947719    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:18.947733    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.947743    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.947752    8582 round_trippers.go:580]     Audit-Id: 9cfab1ea-0452-4235-935e-ae7de4df3621
	I0222 20:46:18.947761    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.947769    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.947777    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.947791    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.948876    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"475"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0222 20:46:18.950503    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.950545    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:18.950550    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.950556    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.950562    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.952608    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.952617    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.952623    8582 round_trippers.go:580]     Audit-Id: 1df32b32-d3a0-4ae6-a62b-6ffee63f8bcd
	I0222 20:46:18.952628    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.952652    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.952658    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.952664    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.952670    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.952798    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0222 20:46:18.953052    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.953059    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.953065    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.953071    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.955527    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.955536    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.955542    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.955546    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.955552    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.955558    8582 round_trippers.go:580]     Audit-Id: ef170e8e-5244-4a78-a42f-2768561564d9
	I0222 20:46:18.955563    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.955571    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.955640    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.955830    8582 pod_ready.go:92] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.955835    8582 pod_ready.go:81] duration metric: took 5.323974ms waiting for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.955841    8582 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.955878    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/etcd-multinode-216000
	I0222 20:46:18.955884    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.955890    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.955895    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.958367    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.958377    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.958383    8582 round_trippers.go:580]     Audit-Id: d81426b0-1296-473d-a4ef-9f51011fd757
	I0222 20:46:18.958388    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.958394    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.958399    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.958404    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.958410    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.958455    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-216000","namespace":"kube-system","uid":"c2b06896-f123-48bd-8603-0d7493488f5c","resourceVersion":"389","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.mirror":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257428627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0222 20:46:18.958683    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.958689    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.958695    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.958701    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.960893    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.960902    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.960908    8582 round_trippers.go:580]     Audit-Id: bfa68149-876f-4291-8787-2b94f01b62f1
	I0222 20:46:18.960913    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.960918    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.960923    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.960928    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.960933    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.961133    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.961322    8582 pod_ready.go:92] pod "etcd-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.961328    8582 pod_ready.go:81] duration metric: took 5.483121ms waiting for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.961341    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.961373    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-216000
	I0222 20:46:18.961378    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.961386    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.961392    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.963651    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.963661    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.963666    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.963671    8582 round_trippers.go:580]     Audit-Id: d78e3e31-7cb7-4746-ae32-bdb4e869b316
	I0222 20:46:18.963677    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.963682    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.963700    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.963709    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.963796    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-216000","namespace":"kube-system","uid":"a28861be-afed-4463-a3c0-e438a5122dc8","resourceVersion":"276","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.mirror":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.seen":"2023-02-23T04:45:32.257429393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0222 20:46:18.964078    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.964084    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.964092    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.964097    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.966297    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.966305    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.966311    8582 round_trippers.go:580]     Audit-Id: 1030b8a7-65b7-494a-8b3e-ee25fa64c27e
	I0222 20:46:18.966316    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.966321    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.966327    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.966334    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.966340    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.966390    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.966581    8582 pod_ready.go:92] pod "kube-apiserver-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.966587    8582 pod_ready.go:81] duration metric: took 5.24023ms waiting for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.966593    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.966620    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-216000
	I0222 20:46:18.966624    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.966629    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.966635    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.968557    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:18.968568    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.968575    8582 round_trippers.go:580]     Audit-Id: 0e1d7f2b-137b-4df3-9df0-2aadcbbacb16
	I0222 20:46:18.968582    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.968587    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.968592    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.968598    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.968603    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.968665    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-216000","namespace":"kube-system","uid":"a851a311-37aa-46d5-9152-a95acbbc88ec","resourceVersion":"272","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.mirror":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257424246Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0222 20:46:18.968925    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.968931    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.968937    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.968942    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.970812    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:18.970821    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.970826    8582 round_trippers.go:580]     Audit-Id: fab1ed04-ee22-44a1-bcb5-d76f9046f7f4
	I0222 20:46:18.970831    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.970837    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.970844    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.970850    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.970855    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.970899    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.971069    8582 pod_ready.go:92] pod "kube-controller-manager-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.971075    8582 pod_ready.go:81] duration metric: took 4.476472ms waiting for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
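
Each pod_ready check above pairs a GET on the pod with a GET on the node it is scheduled to, and the control-plane pods all report Ready immediately; the kube-proxy pod on the new worker (below) is the one still coming up, which is why that poll repeats. A hedged batch equivalent of the same checks, using labels taken straight from the responses above (tier=control-plane on the static pods, k8s-app=kube-proxy on the DaemonSet pods):

    # Illustrative only.
    kubectl --context multinode-216000 -n kube-system get pods -l tier=control-plane
    kubectl --context multinode-216000 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-proxy --timeout=6m
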
	I0222 20:46:18.971080    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-46778" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:19.143681    8582 request.go:622] Waited for 172.562109ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:19.143744    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:19.143751    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.143760    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.143769    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.146804    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:19.146815    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.146820    8582 round_trippers.go:580]     Audit-Id: cbf86953-050d-4bba-ade2-9de2630b05ba
	I0222 20:46:19.146825    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.146830    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.146836    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.146842    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.146846    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.146907    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"466","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0222 20:46:19.341622    8582 request.go:622] Waited for 194.490892ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:19.341670    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:19.341675    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.341682    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.341687    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.344475    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:19.344484    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.344490    8582 round_trippers.go:580]     Audit-Id: 39093815-173f-4ad3-ad79-0c5d9d8a3ba3
	I0222 20:46:19.344504    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.344510    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.344515    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.344521    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.344526    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.344597    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:19.846693    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:19.846709    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.846717    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.846725    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.849415    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:19.849424    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.849429    8582 round_trippers.go:580]     Audit-Id: 98d2f8f1-6459-4728-bb89-e0f375564544
	I0222 20:46:19.849435    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.849445    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.849451    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.849455    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.849460    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.849528    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:19.849815    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:19.849822    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.849831    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.849839    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.852041    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:19.852050    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.852055    8582 round_trippers.go:580]     Audit-Id: 6e8c45f3-9b6c-45d3-b226-20d6e17614dd
	I0222 20:46:19.852060    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.852066    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.852070    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.852075    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.852080    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.852245    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:20.346864    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:20.346886    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.346905    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.346921    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.351121    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:20.351142    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.351152    8582 round_trippers.go:580]     Audit-Id: c8258f38-86b3-4310-b3f4-2bd897ede14e
	I0222 20:46:20.351158    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.351165    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.351170    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.351174    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.351179    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.351473    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:20.351755    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:20.351762    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.351768    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.351777    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.353995    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:20.354008    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.354013    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.354018    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.354023    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.354027    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.354032    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.354038    8582 round_trippers.go:580]     Audit-Id: b76772b3-39e1-4239-acca-6bbeb1a3418c
	I0222 20:46:20.354109    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:20.846616    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:20.846635    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.846644    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.846652    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.849934    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:20.849944    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.849951    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.849956    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.849960    8582 round_trippers.go:580]     Audit-Id: f7941260-563a-468e-a52d-5bf0bf4e524e
	I0222 20:46:20.849965    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.849970    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.849975    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.850041    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:20.850322    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:20.850331    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.850337    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.850350    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.852772    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:20.852783    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.852788    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.852824    8582 round_trippers.go:580]     Audit-Id: 847eec7f-3c05-4ceb-a2bb-56f9f4de0cb9
	I0222 20:46:20.852830    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.852835    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.852839    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.852844    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.853032    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:21.346841    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:21.346863    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.346875    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.346886    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.350603    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:21.350620    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.350632    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.350639    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.350647    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.350654    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.350660    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.350667    8582 round_trippers.go:580]     Audit-Id: 2dcf2bfc-db06-4c07-8b61-cb087f692f62
	I0222 20:46:21.351229    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:21.351507    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:21.351513    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.351519    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.351525    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.353613    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:21.353622    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.353628    8582 round_trippers.go:580]     Audit-Id: b59f483d-5073-410c-99d3-e012ea3f39cb
	I0222 20:46:21.353633    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.353638    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.353643    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.353651    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.353656    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.353702    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:21.353863    8582 pod_ready.go:102] pod "kube-proxy-46778" in "kube-system" namespace has status "Ready":"False"
	I0222 20:46:21.846722    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:21.846756    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.846763    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.846768    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.849827    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:21.849841    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.849847    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.849852    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.849857    8582 round_trippers.go:580]     Audit-Id: b78c7bcc-be7d-4078-ad7f-2c82c36301fa
	I0222 20:46:21.849866    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.849875    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.849881    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.849974    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:21.850242    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:21.850248    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.850254    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.850259    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.852583    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:21.852594    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.852601    8582 round_trippers.go:580]     Audit-Id: 8572c98f-6fe0-446c-9575-eee15f51a854
	I0222 20:46:21.852612    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.852623    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.852634    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.852642    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.852651    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.852808    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:22.346661    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:22.346689    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.346703    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.346713    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.349738    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:22.349754    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.349764    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.349772    8582 round_trippers.go:580]     Audit-Id: 87d263c0-76b3-4882-91c6-346a3caa7e3a
	I0222 20:46:22.349780    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.349789    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.349797    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.349806    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.350089    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"488","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0222 20:46:22.350491    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:22.350501    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.350511    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.350521    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.353025    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.353041    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.353049    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.353055    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.353059    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.353065    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.353070    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.353076    8582 round_trippers.go:580]     Audit-Id: f66995c1-079c-4b0f-9c28-a9463dba62b6
	I0222 20:46:22.353152    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:22.353396    8582 pod_ready.go:92] pod "kube-proxy-46778" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:22.353408    8582 pod_ready.go:81] duration metric: took 3.382361853s waiting for pod "kube-proxy-46778" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.353414    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.353456    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-fgxrw
	I0222 20:46:22.353461    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.353467    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.353472    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.356032    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.356044    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.356053    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.356065    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.356078    8582 round_trippers.go:580]     Audit-Id: 22dc97bb-cb4c-4bbf-9d47-8c11c650cca8
	I0222 20:46:22.356087    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.356099    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.356106    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.356404    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fgxrw","generateName":"kube-proxy-","namespace":"kube-system","uid":"7402cf62-2944-469b-9c38-0447377d4579","resourceVersion":"393","creationTimestamp":"2023-02-23T04:45:44Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0222 20:46:22.356669    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:22.356676    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.356682    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.356688    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.358924    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.358935    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.358941    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.358945    8582 round_trippers.go:580]     Audit-Id: 22231229-4a0a-4731-862f-45405e118087
	I0222 20:46:22.358950    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.358955    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.358962    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.358969    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.359200    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:22.359400    8582 pod_ready.go:92] pod "kube-proxy-fgxrw" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:22.359406    8582 pod_ready.go:81] duration metric: took 5.986994ms waiting for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.359413    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.359447    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-216000
	I0222 20:46:22.359452    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.359457    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.359463    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.361952    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.361962    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.361968    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.361973    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.361978    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.361983    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.361988    8582 round_trippers.go:580]     Audit-Id: 81ec07f5-fd93-466c-96c8-71262db3993e
	I0222 20:46:22.361993    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.362043    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-216000","namespace":"kube-system","uid":"a77cec17-0ffa-4b1b-91b0-aa6367fc7848","resourceVersion":"270","creationTimestamp":"2023-02-23T04:45:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.mirror":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.seen":"2023-02-23T04:45:22.142158982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0222 20:46:22.542321    8582 request.go:622] Waited for 180.040624ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:22.542413    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:22.542423    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.542434    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.542445    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.546662    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:22.546675    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.546681    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.546686    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.546696    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.546700    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.546705    8582 round_trippers.go:580]     Audit-Id: 0bd68af7-a048-4260-a55d-273668ed8a1c
	I0222 20:46:22.546711    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.546772    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:22.546974    8582 pod_ready.go:92] pod "kube-scheduler-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:22.546979    8582 pod_ready.go:81] duration metric: took 187.56463ms waiting for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.546986    8582 pod_ready.go:38] duration metric: took 3.602451551s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:46:22.546996    8582 system_svc.go:44] waiting for kubelet service to be running ....
	I0222 20:46:22.547057    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:46:22.557100    8582 system_svc.go:56] duration metric: took 10.100547ms WaitForService to wait for kubelet.
	I0222 20:46:22.557117    8582 kubeadm.go:578] duration metric: took 3.751788104s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0222 20:46:22.557128    8582 node_conditions.go:102] verifying NodePressure condition ...
	I0222 20:46:22.741631    8582 request.go:622] Waited for 184.467541ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:22.741670    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:22.741677    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.741685    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.741691    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.744324    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.744334    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.744340    8582 round_trippers.go:580]     Audit-Id: 38ce9f3c-238e-4507-b050-635b3ac809a7
	I0222 20:46:22.744345    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.744350    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.744358    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.744363    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.744368    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.744458    8582 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10171 chars]
	I0222 20:46:22.744767    8582 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 20:46:22.744774    8582 node_conditions.go:123] node cpu capacity is 6
	I0222 20:46:22.744780    8582 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 20:46:22.744785    8582 node_conditions.go:123] node cpu capacity is 6
	I0222 20:46:22.744789    8582 node_conditions.go:105] duration metric: took 187.659547ms to run NodePressure ...
	I0222 20:46:22.744796    8582 start.go:228] waiting for startup goroutines ...
	I0222 20:46:22.744814    8582 start.go:242] writing updated cluster config ...
	I0222 20:46:22.745146    8582 ssh_runner.go:195] Run: rm -f paused
	I0222 20:46:22.784119    8582 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0222 20:46:22.807369    8582 out.go:177] * Done! kubectl is now configured to use "multinode-216000" cluster and "default" namespace by default
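Note: the pod_ready.go lines above poll the API server (roughly every 500ms) until each system pod reports the Ready condition, then the run finishes with the "Done!" message. As a rough illustration only, and not minikube's actual pod_ready.go, the following client-go sketch performs the same Ready-condition check; the kubeconfig path and the pod name (taken from this run) are assumptions for the example.

// Illustrative sketch (not minikube's pod_ready.go): poll a pod's Ready
// condition with client-go, the same check the log above performs against
// /api/v1/namespaces/kube-system/pods/<name>.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls at roughly this interval
	}
	return fmt.Errorf("pod %s/%s never became Ready within %s", ns, name, timeout)
}

func main() {
	// Assumes a reachable kubeconfig; the default home path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from this test run; 6m matches the wait budget in the log.
	if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-proxy-46778", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}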
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 04:45:13 UTC, end at Thu 2023-02-23 04:46:30 UTC. --
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499542152Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499564051Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499572853Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499632214Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499654897Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499703557Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499723024Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499737981Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499759117Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499973420Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499997160Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.500427055Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.507259663Z" level=info msg="Loading containers: start."
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.587748519Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.621278170Z" level=info msg="Loading containers: done."
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.630089727Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.630149440Z" level=info msg="Daemon has completed initialization"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.651125010Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 04:45:17 multinode-216000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.655811050Z" level=info msg="API listen on [::]:2376"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.662886613Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 04:45:59 multinode-216000 dockerd[831]: time="2023-02-23T04:45:59.752398209Z" level=info msg="ignoring event" container=c55ff201a3beafc9c7019ee48716439f5997eba482a3bdfec5f22e3fa91db8a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 04:45:59 multinode-216000 dockerd[831]: time="2023-02-23T04:45:59.858374176Z" level=info msg="ignoring event" container=fbcd20014202d62fae727a61457015133a4625ca6c475ea4175764118df8ca5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 04:46:00 multinode-216000 dockerd[831]: time="2023-02-23T04:46:00.749070438Z" level=info msg="ignoring event" container=027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 04:46:00 multinode-216000 dockerd[831]: time="2023-02-23T04:46:00.815707996Z" level=info msg="ignoring event" container=1f23609febdb93b06584e2b8dcfd321b7de2e61770d21055d57f831e411a6658 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	d817db693e5ff       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 seconds ago        Running             busybox                   0                   cf4fffb0d75b0
	fb3f53c39a6de       5185b96f0becf                                                                                         30 seconds ago       Running             coredns                   1                   83ecfda61b7c3
	fbcd25148deb8       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              42 seconds ago       Running             kindnet-cni               0                   9a583941b0c3a
	92a561568dbbc       6e38f40d628db                                                                                         44 seconds ago       Running             storage-provisioner       0                   2101cb58e3875
	c55ff201a3bea       5185b96f0becf                                                                                         44 seconds ago       Exited              coredns                   0                   fbcd20014202d
	88291aae322ac       46a6bb3c77ce0                                                                                         45 seconds ago       Running             kube-proxy                0                   018b2cd0c3e66
	6b81e4fbf6fb8       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   428f6e799d799
	7e0db19194ff3       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   b75a9eb44907f
	f3b7205a3e76d       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   bc028811fdb89
	ab226fd8fda30       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   16a57a2f27e7d
	
	* 
	* ==> coredns [c55ff201a3be] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 3457779542163645706.7867643797966139542. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 3457779542163645706.7867643797966139542. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [fb3f53c39a6d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:38494 - 5691 "HINFO IN 8213277836580515030.8808638030112362167. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014422298s
	[INFO] 10.244.0.3:53106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171719s
	[INFO] 10.244.0.3:47359 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.004571152s
	[INFO] 10.244.0.3:57995 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00382021s
	[INFO] 10.244.0.3:43987 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006539722s
	[INFO] 10.244.0.3:41933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137221s
	[INFO] 10.244.0.3:52335 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006567973s
	[INFO] 10.244.0.3:34458 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016992s
	[INFO] 10.244.0.3:50072 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127485s
	[INFO] 10.244.0.3:55564 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003918775s
	[INFO] 10.244.0.3:48263 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132491s
	[INFO] 10.244.0.3:36574 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096616s
	[INFO] 10.244.0.3:34001 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084501s
	[INFO] 10.244.0.3:58616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099802s
	[INFO] 10.244.0.3:39839 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069938s
	[INFO] 10.244.0.3:60998 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080043s
	[INFO] 10.244.0.3:60140 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127766s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-216000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-216000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321
	                    minikube.k8s.io/name=multinode-216000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_22T20_45_33_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 04:45:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-216000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 04:46:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 04:46:03 +0000   Thu, 23 Feb 2023 04:45:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 04:46:03 +0000   Thu, 23 Feb 2023 04:45:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 04:46:03 +0000   Thu, 23 Feb 2023 04:45:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 04:46:03 +0000   Thu, 23 Feb 2023 04:45:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-216000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    14aace2c-fe48-40d9-b364-15d456a94896
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-c4gl8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-787d4945fb-48v9r                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     46s
	  kube-system                 etcd-multinode-216000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-m7gzm                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      47s
	  kube-system                 kube-apiserver-multinode-216000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-multinode-216000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-fgxrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-multinode-216000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 45s   kube-proxy       
	  Normal  Starting                 59s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  59s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node multinode-216000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node multinode-216000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node multinode-216000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s   node-controller  Node multinode-216000 event: Registered Node multinode-216000 in Controller
	
	
	Name:               multinode-216000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-216000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 04:46:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-216000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 04:46:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-216000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    14aace2c-fe48-40d9-b364-15d456a94896
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-mhxxv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-7vj2s               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14s
	  kube-system                 kube-proxy-46778            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 8s                 kube-proxy       
	  Normal  Starting                 14s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x2 over 14s)  kubelet          Node multinode-216000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x2 over 14s)  kubelet          Node multinode-216000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x2 over 14s)  kubelet          Node multinode-216000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13s                kubelet          Node multinode-216000-m02 status is now: NodeReady
	  Normal  RegisteredNode           12s                node-controller  Node multinode-216000-m02 event: Registered Node multinode-216000-m02 in Controller
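Note: the conditions and capacity shown for both nodes above are what the node_conditions.go lines in the start log read back (ephemeral storage 61202244Ki, 6 CPUs, pressure conditions all False). Below is a minimal client-go sketch of that read-back, assuming a reachable kubeconfig; it is illustrative only and is not minikube's node_conditions.go.

// Illustrative sketch: list nodes and print the allocatable cpu /
// ephemeral-storage plus the pressure/Ready conditions summarized above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		storage := n.Status.Allocatable[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure / DiskPressure / PIDPressure should be False; Ready should be True.
			fmt.Printf("  %-16s %s (%s)\n", c.Type, c.Status, c.Reason)
		}
	}
}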
	
	* 
	* ==> dmesg <==
	* [  +0.000081] FS-Cache: O-key=[8] '9b91130600000000'
	[  +0.000132] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000083] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=00000000defe59bd
	[  +0.000064] FS-Cache: N-key=[8] '9b91130600000000'
	[  +0.003548] FS-Cache: Duplicate cookie detected
	[  +0.000041] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000053] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=00000000431c20f9
	[  +0.000062] FS-Cache: O-key=[8] '9b91130600000000'
	[  +0.000127] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000080] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=0000000013b0fbbe
	[  +0.000045] FS-Cache: N-key=[8] '9b91130600000000'
	[  +3.557940] FS-Cache: Duplicate cookie detected
	[  +0.000036] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000054] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=000000005612b0fe
	[  +0.000059] FS-Cache: O-key=[8] '9a91130600000000'
	[  +0.000042] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000042] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=000000007465c420
	[  +0.000051] FS-Cache: N-key=[8] '9a91130600000000'
	[  +0.500925] FS-Cache: Duplicate cookie detected
	[  +0.000054] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000033] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=0000000059e8f346
	[  +0.000062] FS-Cache: O-key=[8] 'b991130600000000'
	[  +0.000047] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000043] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=000000007c126f1c
	[  +0.000043] FS-Cache: N-key=[8] 'b991130600000000'
	
	* 
	* ==> etcd [ab226fd8fda3] <==
	* {"level":"info","ts":"2023-02-23T04:45:27.055Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T04:45:27.055Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T04:45:27.055Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T04:45:27.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-02-23T04:45:27.056Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-216000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.352Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-23T04:46:07.245Z","caller":"traceutil/trace.go:171","msg":"trace[145973251] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"197.676001ms","start":"2023-02-23T04:46:07.047Z","end":"2023-02-23T04:46:07.245Z","steps":["trace[145973251] 'process raft request'  (duration: 197.575679ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T04:46:09.495Z","caller":"traceutil/trace.go:171","msg":"trace[172939223] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"243.247847ms","start":"2023-02-23T04:46:09.252Z","end":"2023-02-23T04:46:09.495Z","steps":["trace[172939223] 'process raft request'  (duration: 243.077813ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:46:31 up 45 min,  0 users,  load average: 2.18, 1.58, 0.97
	Linux multinode-216000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [fbcd25148deb] <==
	* I0223 04:45:48.822497       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 04:45:48.822551       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 04:45:48.822650       1 main.go:116] setting mtu 1500 for CNI 
	I0223 04:45:48.822686       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 04:45:48.822699       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 04:45:49.316901       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:45:49.316949       1 main.go:227] handling current node
	I0223 04:45:59.424569       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:45:59.424609       1 main.go:227] handling current node
	I0223 04:46:09.497140       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:46:09.497200       1 main.go:227] handling current node
	I0223 04:46:19.501812       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:46:19.501852       1 main.go:227] handling current node
	I0223 04:46:19.501861       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 04:46:19.501865       1 main.go:250] Node multinode-216000-m02 has CIDR [10.244.1.0/24] 
	I0223 04:46:19.502030       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0223 04:46:29.509653       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:46:29.509696       1 main.go:227] handling current node
	I0223 04:46:29.509704       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 04:46:29.509709       1 main.go:250] Node multinode-216000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [f3b7205a3e76] <==
	* I0223 04:45:29.247157       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 04:45:29.247344       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 04:45:29.247681       1 cache.go:39] Caches are synced for autoregister controller
	I0223 04:45:29.247906       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 04:45:29.248031       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 04:45:29.249517       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 04:45:29.249533       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 04:45:29.250361       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 04:45:29.263698       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 04:45:29.964125       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 04:45:30.152043       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 04:45:30.154737       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 04:45:30.154831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 04:45:30.574866       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 04:45:30.637844       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 04:45:30.689286       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 04:45:30.694357       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 04:45:30.695102       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 04:45:30.698683       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 04:45:31.182895       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 04:45:32.146672       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 04:45:32.154470       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 04:45:32.161585       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 04:45:44.338211       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0223 04:45:44.837385       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [6b81e4fbf6fb] <==
	* I0223 04:45:44.187559       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0223 04:45:44.187627       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0223 04:45:44.221072       1 shared_informer.go:280] Caches are synced for disruption
	I0223 04:45:44.233337       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 04:45:44.240945       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 04:45:44.341529       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 04:45:44.554337       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 04:45:44.598648       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 04:45:44.598698       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 04:45:44.844139       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fgxrw"
	I0223 04:45:44.845869       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-m7gzm"
	I0223 04:45:44.944557       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 04:45:45.064311       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-j4pt7"
	I0223 04:45:45.076675       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-48v9r"
	I0223 04:45:45.144962       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-j4pt7"
	W0223 04:46:17.605968       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-216000-m02" does not exist
	I0223 04:46:17.609169       1 range_allocator.go:372] Set node multinode-216000-m02 PodCIDR to [10.244.1.0/24]
	I0223 04:46:17.612695       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7vj2s"
	I0223 04:46:17.612992       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-46778"
	W0223 04:46:18.219641       1 topologycache.go:232] Can't get CPU or zone information for multinode-216000-m02 node
	W0223 04:46:19.043278       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-216000-m02. Assuming now as a timestamp.
	I0223 04:46:19.043567       1 event.go:294] "Event occurred" object="multinode-216000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-216000-m02 event: Registered Node multinode-216000-m02 in Controller"
	I0223 04:46:23.783575       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 04:46:23.839991       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-mhxxv"
	I0223 04:46:23.852273       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-c4gl8"
	
	* 
	* ==> kube-proxy [88291aae322a] <==
	* I0223 04:45:45.846443       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 04:45:45.846531       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 04:45:45.846552       1 server_others.go:535] "Using iptables proxy"
	I0223 04:45:45.929802       1 server_others.go:176] "Using iptables Proxier"
	I0223 04:45:45.929850       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 04:45:45.929857       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 04:45:45.929873       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 04:45:45.929896       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 04:45:45.930339       1 server.go:655] "Version info" version="v1.26.1"
	I0223 04:45:45.930378       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 04:45:45.936478       1 config.go:317] "Starting service config controller"
	I0223 04:45:45.936505       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 04:45:45.936611       1 config.go:444] "Starting node config controller"
	I0223 04:45:45.936617       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 04:45:45.936756       1 config.go:226] "Starting endpoint slice config controller"
	I0223 04:45:45.936766       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 04:45:46.037358       1 shared_informer.go:280] Caches are synced for node config
	I0223 04:45:46.037404       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 04:45:46.037415       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [7e0db19194ff] <==
	* W0223 04:45:29.222047       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 04:45:29.222098       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 04:45:29.222182       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 04:45:29.222239       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 04:45:29.222181       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 04:45:29.222310       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0223 04:45:29.222405       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 04:45:29.222416       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 04:45:29.222631       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0223 04:45:29.222689       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 04:45:30.152931       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0223 04:45:30.152952       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0223 04:45:30.180144       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 04:45:30.180189       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 04:45:30.180823       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 04:45:30.180862       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 04:45:30.242456       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 04:45:30.242503       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 04:45:30.276564       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 04:45:30.276608       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 04:45:30.318777       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 04:45:30.318844       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 04:45:30.577855       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 04:45:30.577877       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0223 04:45:33.182849       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 04:45:13 UTC, end at Thu 2023-02-23 04:46:32 UTC. --
	Feb 23 04:45:48 multinode-216000 kubelet[2181]: I0223 04:45:48.061397    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-j4pt7" podStartSLOduration=3.061370814 pod.CreationTimestamp="2023-02-23 04:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:48.061045884 +0000 UTC m=+15.931163573" watchObservedRunningTime="2023-02-23 04:45:48.061370814 +0000 UTC m=+15.931488509"
	Feb 23 04:45:48 multinode-216000 kubelet[2181]: I0223 04:45:48.517819    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fgxrw" podStartSLOduration=4.517759268 pod.CreationTimestamp="2023-02-23 04:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:48.517541007 +0000 UTC m=+16.387658717" watchObservedRunningTime="2023-02-23 04:45:48.517759268 +0000 UTC m=+16.387876977"
	Feb 23 04:45:49 multinode-216000 kubelet[2181]: I0223 04:45:49.262988    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.262959767 pod.CreationTimestamp="2023-02-23 04:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:48.86094261 +0000 UTC m=+16.731060299" watchObservedRunningTime="2023-02-23 04:45:49.262959767 +0000 UTC m=+17.133077455"
	Feb 23 04:45:49 multinode-216000 kubelet[2181]: I0223 04:45:49.263096    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-m7gzm" podStartSLOduration=-9.223372031591692e+09 pod.CreationTimestamp="2023-02-23 04:45:44 +0000 UTC" firstStartedPulling="2023-02-23 04:45:45.746711806 +0000 UTC m=+13.616829491" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:49.262844914 +0000 UTC m=+17.132962604" watchObservedRunningTime="2023-02-23 04:45:49.263083637 +0000 UTC m=+17.133201326"
	Feb 23 04:45:53 multinode-216000 kubelet[2181]: I0223 04:45:53.246477    2181 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 04:45:53 multinode-216000 kubelet[2181]: I0223 04:45:53.247416    2181 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.261501    2181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcd20014202d62fae727a61457015133a4625ca6c475ea4175764118df8ca5d"
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.261544    2181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83ecfda61b7c397560d774e70af16d14bf264b3bc61aabeedc234596f9ce2aea"
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.971588    2181 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f320eeed-16b9-4969-b449-323abb78b55f-config-volume\") pod \"f320eeed-16b9-4969-b449-323abb78b55f\" (UID: \"f320eeed-16b9-4969-b449-323abb78b55f\") "
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.971694    2181 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj7nd\" (UniqueName: \"kubernetes.io/projected/f320eeed-16b9-4969-b449-323abb78b55f-kube-api-access-cj7nd\") pod \"f320eeed-16b9-4969-b449-323abb78b55f\" (UID: \"f320eeed-16b9-4969-b449-323abb78b55f\") "
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: W0223 04:46:00.972232    2181 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f320eeed-16b9-4969-b449-323abb78b55f/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.972522    2181 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f320eeed-16b9-4969-b449-323abb78b55f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f320eeed-16b9-4969-b449-323abb78b55f" (UID: "f320eeed-16b9-4969-b449-323abb78b55f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.974609    2181 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f320eeed-16b9-4969-b449-323abb78b55f-kube-api-access-cj7nd" (OuterVolumeSpecName: "kube-api-access-cj7nd") pod "f320eeed-16b9-4969-b449-323abb78b55f" (UID: "f320eeed-16b9-4969-b449-323abb78b55f"). InnerVolumeSpecName "kube-api-access-cj7nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.072080    2181 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f320eeed-16b9-4969-b449-323abb78b55f-config-volume\") on node \"multinode-216000\" DevicePath \"\""
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.072182    2181 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cj7nd\" (UniqueName: \"kubernetes.io/projected/f320eeed-16b9-4969-b449-323abb78b55f-kube-api-access-cj7nd\") on node \"multinode-216000\" DevicePath \"\""
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.285702    2181 scope.go:115] "RemoveContainer" containerID="027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.299321    2181 scope.go:115] "RemoveContainer" containerID="027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: E0223 04:46:01.299992    2181 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13" containerID="027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.300035    2181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13} err="failed to get container status \"027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13\": rpc error: code = Unknown desc = Error: No such container: 027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:02 multinode-216000 kubelet[2181]: I0223 04:46:02.358773    2181 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f320eeed-16b9-4969-b449-323abb78b55f path="/var/lib/kubelet/pods/f320eeed-16b9-4969-b449-323abb78b55f/volumes"
	Feb 23 04:46:23 multinode-216000 kubelet[2181]: I0223 04:46:23.864736    2181 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 04:46:23 multinode-216000 kubelet[2181]: E0223 04:46:23.864809    2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f320eeed-16b9-4969-b449-323abb78b55f" containerName="coredns"
	Feb 23 04:46:23 multinode-216000 kubelet[2181]: I0223 04:46:23.864843    2181 memory_manager.go:346] "RemoveStaleState removing state" podUID="f320eeed-16b9-4969-b449-323abb78b55f" containerName="coredns"
	Feb 23 04:46:24 multinode-216000 kubelet[2181]: I0223 04:46:24.031828    2181 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp78h\" (UniqueName: \"kubernetes.io/projected/d3e6682b-35f9-4054-bf12-86ca2b50d6ad-kube-api-access-wp78h\") pod \"busybox-6b86dd6d48-c4gl8\" (UID: \"d3e6682b-35f9-4054-bf12-86ca2b50d6ad\") " pod="default/busybox-6b86dd6d48-c4gl8"
	Feb 23 04:46:27 multinode-216000 kubelet[2181]: I0223 04:46:27.458885    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-c4gl8" podStartSLOduration=-9.22337203239592e+09 pod.CreationTimestamp="2023-02-23 04:46:23 +0000 UTC" firstStartedPulling="2023-02-23 04:46:24.411232126 +0000 UTC m=+52.281697299" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:46:27.458738496 +0000 UTC m=+55.329528276" watchObservedRunningTime="2023-02-23 04:46:27.458855135 +0000 UTC m=+55.329644921"
	
	* 
	* ==> storage-provisioner [92a561568dbb] <==
	* I0223 04:45:46.870515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0223 04:45:46.920473       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0223 04:45:46.920593       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0223 04:45:46.928535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0223 04:45:46.928644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed37d3f7-cde9-4eac-aad3-316d2cb56d11", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-216000_8f2a23e5-0bc1-4427-bb86-23d8c8c27eb8 became leader
	I0223 04:45:46.928692       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-216000_8f2a23e5-0bc1-4427-bb86-23d8c8c27eb8!
	I0223 04:45:47.028878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-216000_8f2a23e5-0bc1-4427-bb86-23d8c8c27eb8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-216000 -n multinode-216000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-216000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (9.45s)

TestMultiNode/serial/PingHostFrom2Pods (4.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-c4gl8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-c4gl8 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

** /stderr **
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- sh -c "ping -c 1 <nil>"
multinode_test.go:558: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-mhxxv -- sh -c "ping -c 1 <nil>": exit status 2 (156.662208ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:559: Failed to ping host (<nil>) from pod (busybox-6b86dd6d48-mhxxv): exit status 2
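Note on the failure above: the in-pod pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" yields an empty string when the lookup fails, so the test ends up formatting the host IP as <nil>, and the follow-up sh -c "ping -c 1 <nil>" is rejected because the shell treats <nil> as redirection syntax rather than an argument, hence the "syntax error: unexpected end of file" in the stderr block. Below is a minimal, hypothetical Go sketch of that extraction; the helper name hostIPFromNslookup and the sample nslookup outputs are illustrative assumptions, not code or data taken from multinode_test.go.

// Hypothetical sketch, not the actual multinode_test.go code: approximate the
// shell pipeline (awk 'NR==5' | cut -d' ' -f3) used to pull the host IP out of
// busybox nslookup output, and show why a failed lookup produces no address.
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup returns the third whitespace-separated field of the fifth
// line of nslookup output, roughly what awk 'NR==5' | cut -d' ' -f3 extracts.
func hostIPFromNslookup(out string) (string, error) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("nslookup output has only %d lines", len(lines))
	}
	fields := strings.Fields(lines[4])
	if len(fields) < 3 {
		return "", fmt.Errorf("no address field on line 5: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Illustrative busybox-style output for a successful lookup; line 5 carries
	// "Address 1: <ip> <name>".
	ok := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.65.2 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(ok))

	// Failed lookup, as in the stderr block above: there is no fifth line, so
	// the caller gets an error rather than an address to feed into ping.
	bad := "nslookup: can't resolve 'host.minikube.internal'\n"
	fmt.Println(hostIPFromNslookup(bad))
}

For the resolving pod (busybox-6b86dd6d48-c4gl8) this kind of extraction yields a usable address such as 192.168.65.2; for the non-resolving pod it reports an error instead of silently producing the empty value that became <nil> above.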
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-216000
helpers_test.go:235: (dbg) docker inspect multinode-216000:

-- stdout --
	[
	    {
	        "Id": "d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959",
	        "Created": "2023-02-23T04:45:13.001856261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91099,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T04:45:13.302858879Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/hosts",
	        "LogPath": "/var/lib/docker/containers/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959/d1c0f7655c15eef1efe5c6d58c3c78df06722a69dd030f3c34b9839d94567959-json.log",
	        "Name": "/multinode-216000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-216000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-216000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965/merged",
	                "UpperDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965/diff",
	                "WorkDir": "/var/lib/docker/overlay2/401f4b41ed469b58bb165683b6d891023fa70018515810566276a95365c01965/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-216000",
	                "Source": "/var/lib/docker/volumes/multinode-216000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-216000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-216000",
	                "name.minikube.sigs.k8s.io": "multinode-216000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d1c2184d95f5b2cfb1b864dc674bd5ec65e2eab2a6e3049daa7f510b2cbbfd3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51081"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51084"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7d1c2184d95f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-216000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d1c0f7655c15",
	                        "multinode-216000"
	                    ],
	                    "NetworkID": "e104cc785eb296a0aa06f78ef3ef072e8cf133e0149d2eac0fdc506bb97fa0a6",
	                    "EndpointID": "bc51ae122101bda0410b593b0e1a23a47ed9855bf39114751624238086d03650",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-216000 -n multinode-216000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 logs -n 25: (2.601188181s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-621000 ssh -- ls                    | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-599000                           | mount-start-1-599000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-621000 ssh -- ls                    | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:44 PST |
	| start   | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:44 PST | 22 Feb 23 20:45 PST |
	| ssh     | mount-start-2-621000 ssh -- ls                    | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:45 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-621000                           | mount-start-2-621000 | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:45 PST |
	| delete  | -p mount-start-1-599000                           | mount-start-1-599000 | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:45 PST |
	| start   | -p multinode-216000                               | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:45 PST | 22 Feb 23 20:46 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- apply -f                   | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- rollout                    | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- get pods -o                | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- get pods -o                | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST |                     |
	|         | busybox-6b86dd6d48-mhxxv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST |                     |
	|         | busybox-6b86dd6d48-mhxxv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST |                     |
	|         | busybox-6b86dd6d48-mhxxv -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- get pods -o                | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-c4gl8 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.65.2                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST | 22 Feb 23 20:46 PST |
	|         | busybox-6b86dd6d48-mhxxv                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-216000 -- exec                       | multinode-216000     | jenkins | v1.29.0 | 22 Feb 23 20:46 PST |                     |
	|         | busybox-6b86dd6d48-mhxxv -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 <nil>                                |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
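	The three exec commands against busybox-6b86dd6d48-mhxxv above never record an end time, and the final attempt pings a literal <nil>, which suggests the host.minikube.internal lookup from that pod returned nothing. The check being run is roughly the following pipeline (a sketch, assuming the same out/minikube-darwin-amd64 binary used throughout this run):

	    out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-c4gl8 -- \
	      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	    # prints the host IP (192.168.65.2 in this run), which the follow-up step then pings:
	    out/minikube-darwin-amd64 kubectl -p multinode-216000 -- exec busybox-6b86dd6d48-c4gl8 -- \
	      sh -c "ping -c 1 192.168.65.2"

	For the working pod (c4gl8) both steps complete; for mhxxv the pinged address is substituted as <nil>.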
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 20:45:04
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 20:45:04.991762    8582 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:45:04.991911    8582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:45:04.991916    8582 out.go:309] Setting ErrFile to fd 2...
	I0222 20:45:04.991921    8582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:45:04.992030    8582 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:45:04.993498    8582 out.go:303] Setting JSON to false
	I0222 20:45:05.012255    8582 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2680,"bootTime":1677124825,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 20:45:05.012349    8582 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 20:45:05.033766    8582 out.go:177] * [multinode-216000] minikube v1.29.0 on Darwin 13.2
	I0222 20:45:05.076206    8582 notify.go:220] Checking for updates...
	I0222 20:45:05.099860    8582 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 20:45:05.120007    8582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:05.142037    8582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 20:45:05.164182    8582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 20:45:05.186248    8582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 20:45:05.207836    8582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 20:45:05.229298    8582 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 20:45:05.289287    8582 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 20:45:05.289412    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:45:05.434754    8582 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:45:05.341733097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:45:05.477485    8582 out.go:177] * Using the docker driver based on user configuration
	I0222 20:45:05.498783    8582 start.go:296] selected driver: docker
	I0222 20:45:05.498808    8582 start.go:857] validating driver "docker" against <nil>
	I0222 20:45:05.498827    8582 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 20:45:05.502805    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:45:05.643740    8582 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:45:05.552070913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:45:05.643851    8582 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0222 20:45:05.644016    8582 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0222 20:45:05.665866    8582 out.go:177] * Using Docker Desktop driver with root privileges
	I0222 20:45:05.687438    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:45:05.687466    8582 cni.go:136] 0 nodes found, recommending kindnet
	I0222 20:45:05.687476    8582 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0222 20:45:05.687499    8582 start_flags.go:319] config:
	{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:45:05.709548    8582 out.go:177] * Starting control plane node multinode-216000 in cluster multinode-216000
	I0222 20:45:05.731582    8582 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 20:45:05.753496    8582 out.go:177] * Pulling base image ...
	I0222 20:45:05.795718    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:45:05.795782    8582 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 20:45:05.795831    8582 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 20:45:05.795849    8582 cache.go:57] Caching tarball of preloaded images
	I0222 20:45:05.796078    8582 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 20:45:05.796098    8582 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 20:45:05.800060    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:45:05.800099    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json: {Name:mk00bbe28257c4f32206da7d58c62be073f76fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:05.851536    8582 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 20:45:05.851554    8582 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 20:45:05.851573    8582 cache.go:193] Successfully downloaded all kic artifacts
	I0222 20:45:05.851611    8582 start.go:364] acquiring machines lock for multinode-216000: {Name:mk63d9e74b465394c1d51e2bb23e39dc13c4550b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 20:45:05.851759    8582 start.go:368] acquired machines lock for "multinode-216000" in 135.387µs
	I0222 20:45:05.851791    8582 start.go:93] Provisioning new machine with config: &{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 20:45:05.851856    8582 start.go:125] createHost starting for "" (driver="docker")
	I0222 20:45:05.873595    8582 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0222 20:45:05.873894    8582 start.go:159] libmachine.API.Create for "multinode-216000" (driver="docker")
	I0222 20:45:05.873940    8582 client.go:168] LocalClient.Create starting
	I0222 20:45:05.874111    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
	I0222 20:45:05.874189    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:45:05.874221    8582 main.go:141] libmachine: Parsing certificate...
	I0222 20:45:05.874365    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
	I0222 20:45:05.874437    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:45:05.874454    8582 main.go:141] libmachine: Parsing certificate...
	I0222 20:45:05.875240    8582 cli_runner.go:164] Run: docker network inspect multinode-216000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0222 20:45:05.929549    8582 cli_runner.go:211] docker network inspect multinode-216000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0222 20:45:05.929656    8582 network_create.go:281] running [docker network inspect multinode-216000] to gather additional debugging logs...
	I0222 20:45:05.929674    8582 cli_runner.go:164] Run: docker network inspect multinode-216000
	W0222 20:45:05.983920    8582 cli_runner.go:211] docker network inspect multinode-216000 returned with exit code 1
	I0222 20:45:05.983954    8582 network_create.go:284] error running [docker network inspect multinode-216000]: docker network inspect multinode-216000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-216000
	I0222 20:45:05.983972    8582 network_create.go:286] output of [docker network inspect multinode-216000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-216000
	
	** /stderr **
	I0222 20:45:05.984073    8582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0222 20:45:06.041131    8582 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 20:45:06.041464    8582 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003264e0}
	I0222 20:45:06.041477    8582 network_create.go:123] attempt to create docker network multinode-216000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0222 20:45:06.041550    8582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-216000 multinode-216000
	I0222 20:45:06.129740    8582 network_create.go:107] docker network multinode-216000 192.168.58.0/24 created
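	Subnet selection above skips the reserved 192.168.49.0/24 and lands on 192.168.58.0/24 with gateway 192.168.58.1. A standard docker CLI query (not part of this log) to confirm what was actually created would be:

	    docker network inspect multinode-216000 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	    # expected here: 192.168.58.0/24 gw 192.168.58.1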
	I0222 20:45:06.129772    8582 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-216000" container
	I0222 20:45:06.129900    8582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0222 20:45:06.185448    8582 cli_runner.go:164] Run: docker volume create multinode-216000 --label name.minikube.sigs.k8s.io=multinode-216000 --label created_by.minikube.sigs.k8s.io=true
	I0222 20:45:06.240948    8582 oci.go:103] Successfully created a docker volume multinode-216000
	I0222 20:45:06.241064    8582 cli_runner.go:164] Run: docker run --rm --name multinode-216000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000 --entrypoint /usr/bin/test -v multinode-216000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0222 20:45:06.691830    8582 oci.go:107] Successfully prepared a docker volume multinode-216000
	I0222 20:45:06.691869    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:45:06.691885    8582 kic.go:190] Starting extracting preloaded images to volume ...
	I0222 20:45:06.692005    8582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0222 20:45:12.800878    8582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.108858273s)
	I0222 20:45:12.800924    8582 kic.go:199] duration metric: took 6.109107 seconds to extract preloaded images to volume
	I0222 20:45:12.801150    8582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0222 20:45:12.945998    8582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-216000 --name multinode-216000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-216000 --network multinode-216000 --ip 192.168.58.2 --volume multinode-216000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0222 20:45:13.310104    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Running}}
	I0222 20:45:13.372787    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:13.433738    8582 cli_runner.go:164] Run: docker exec multinode-216000 stat /var/lib/dpkg/alternatives/iptables
	I0222 20:45:13.551142    8582 oci.go:144] the created container "multinode-216000" has a running status.
	I0222 20:45:13.551174    8582 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa...
	I0222 20:45:13.685582    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0222 20:45:13.685651    8582 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0222 20:45:13.795275    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:13.853484    8582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0222 20:45:13.853504    8582 kic_runner.go:114] Args: [docker exec --privileged multinode-216000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0222 20:45:13.959164    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:14.017278    8582 machine.go:88] provisioning docker machine ...
	I0222 20:45:14.017318    8582 ubuntu.go:169] provisioning hostname "multinode-216000"
	I0222 20:45:14.017421    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:14.076626    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:14.077017    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:14.077034    8582 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-216000 && echo "multinode-216000" | sudo tee /etc/hostname
	I0222 20:45:14.222328    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-216000
	
	I0222 20:45:14.222426    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:14.279562    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:14.279904    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:14.279919    8582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-216000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-216000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-216000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 20:45:14.415417    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 20:45:14.415444    8582 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 20:45:14.415467    8582 ubuntu.go:177] setting up certificates
	I0222 20:45:14.415476    8582 provision.go:83] configureAuth start
	I0222 20:45:14.415563    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000
	I0222 20:45:14.472439    8582 provision.go:138] copyHostCerts
	I0222 20:45:14.472487    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:45:14.472540    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 20:45:14.472547    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:45:14.472646    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 20:45:14.472807    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:45:14.472854    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 20:45:14.472860    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:45:14.472923    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 20:45:14.473047    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:45:14.473078    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 20:45:14.473083    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:45:14.473146    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 20:45:14.473264    8582 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.multinode-216000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-216000]
	I0222 20:45:14.751737    8582 provision.go:172] copyRemoteCerts
	I0222 20:45:14.751802    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 20:45:14.751850    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:14.813335    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:14.908812    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0222 20:45:14.908913    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 20:45:14.925649    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0222 20:45:14.925738    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0222 20:45:14.943597    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0222 20:45:14.943681    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0222 20:45:14.960896    8582 provision.go:86] duration metric: configureAuth took 545.412695ms
	I0222 20:45:14.960910    8582 ubuntu.go:193] setting minikube options for container-runtime
	I0222 20:45:14.961094    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:45:14.961198    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:15.039112    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:15.039475    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:15.039491    8582 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 20:45:15.175328    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 20:45:15.175341    8582 ubuntu.go:71] root file system type: overlay
	I0222 20:45:15.175434    8582 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 20:45:15.175522    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:15.234498    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:15.234856    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:15.234903    8582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 20:45:15.380173    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 20:45:15.380257    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:15.438479    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:45:15.438848    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51081 <nil> <nil>}
	I0222 20:45:15.438867    8582 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 20:45:16.068520    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:45:15.378684440 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
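	The diff above is applied only because the generated unit differs from the stock one; the mv, daemon-reload, enable and restart then put it into effect. A hedged spot-check on the node, mirroring the "sudo systemctl cat docker.service" the log itself runs later, would be:

	    docker exec multinode-216000 sudo systemctl cat docker.service   # show the unit now in place
	    docker exec multinode-216000 sudo systemctl is-active docker     # should print "active" after the restart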
	
	I0222 20:45:16.068546    8582 machine.go:91] provisioned docker machine in 2.051270175s
	I0222 20:45:16.068553    8582 client.go:171] LocalClient.Create took 10.194722394s
	I0222 20:45:16.068585    8582 start.go:167] duration metric: libmachine.API.Create for "multinode-216000" took 10.19480731s
	I0222 20:45:16.068594    8582 start.go:300] post-start starting for "multinode-216000" (driver="docker")
	I0222 20:45:16.068600    8582 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 20:45:16.068683    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 20:45:16.068750    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.128459    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.223890    8582 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 20:45:16.227346    8582 command_runner.go:130] > NAME="Ubuntu"
	I0222 20:45:16.227356    8582 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0222 20:45:16.227360    8582 command_runner.go:130] > ID=ubuntu
	I0222 20:45:16.227365    8582 command_runner.go:130] > ID_LIKE=debian
	I0222 20:45:16.227370    8582 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0222 20:45:16.227373    8582 command_runner.go:130] > VERSION_ID="20.04"
	I0222 20:45:16.227379    8582 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0222 20:45:16.227384    8582 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0222 20:45:16.227388    8582 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0222 20:45:16.227401    8582 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0222 20:45:16.227407    8582 command_runner.go:130] > VERSION_CODENAME=focal
	I0222 20:45:16.227413    8582 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0222 20:45:16.227455    8582 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 20:45:16.227473    8582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 20:45:16.227481    8582 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 20:45:16.227485    8582 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 20:45:16.227495    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 20:45:16.227592    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 20:45:16.227764    8582 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 20:45:16.227775    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /etc/ssl/certs/31332.pem
	I0222 20:45:16.227979    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 20:45:16.235393    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:45:16.253527    8582 start.go:303] post-start completed in 184.925048ms
	I0222 20:45:16.254077    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000
	I0222 20:45:16.313910    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:45:16.314325    8582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 20:45:16.314389    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.373772    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.467190    8582 command_runner.go:130] > 9%!
	(MISSING)I0222 20:45:16.467268    8582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 20:45:16.471676    8582 command_runner.go:130] > 51G
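	The two one-liners above read the second line of df output for /var: column 5 is the percentage used (9% here) and, with -BG, column 4 is the space still available (51G). The same commands with the columns spelled out:

	    df -h  /var | awk 'NR==2{print $5}'   # Use%  column of the /var filesystem row -> "9%"
	    df -BG /var | awk 'NR==2{print $4}'   # Avail column, in 1G blocks              -> "51G"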
	I0222 20:45:16.471999    8582 start.go:128] duration metric: createHost completed in 10.62025762s
	I0222 20:45:16.472013    8582 start.go:83] releasing machines lock for "multinode-216000", held for 10.620367722s
	I0222 20:45:16.472099    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000
	I0222 20:45:16.575326    8582 ssh_runner.go:195] Run: cat /version.json
	I0222 20:45:16.575327    8582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 20:45:16.575419    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.575449    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:16.638565    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.638608    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:16.730737    8582 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0222 20:45:16.730871    8582 ssh_runner.go:195] Run: systemctl --version
	I0222 20:45:16.788478    8582 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0222 20:45:16.788535    8582 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0222 20:45:16.788559    8582 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0222 20:45:16.788648    8582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 20:45:16.793272    8582 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0222 20:45:16.793281    8582 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0222 20:45:16.793286    8582 command_runner.go:130] > Device: a6h/166d	Inode: 393237      Links: 1
	I0222 20:45:16.793291    8582 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:45:16.793297    8582 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:45:16.793302    8582 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:45:16.793306    8582 command_runner.go:130] > Change: 2023-02-23 04:22:34.614629251 +0000
	I0222 20:45:16.793309    8582 command_runner.go:130] >  Birth: -
	I0222 20:45:16.793706    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 20:45:16.814078    8582 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 20:45:16.814148    8582 ssh_runner.go:195] Run: which cri-dockerd
	I0222 20:45:16.818009    8582 command_runner.go:130] > /usr/bin/cri-dockerd
	I0222 20:45:16.818126    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 20:45:16.825491    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 20:45:16.838269    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0222 20:45:16.852840    8582 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0222 20:45:16.852865    8582 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0222 20:45:16.852876    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:45:16.852888    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:45:16.852968    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:45:16.865179    8582 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0222 20:45:16.865215    8582 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
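	The tee above leaves crictl pointed at the containerd socket; based on the command and its echoed output, reading the file back on the node should show:

	    cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///run/containerd/containerd.sock
	    # image-endpoint: unix:///run/containerd/containerd.sock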
	I0222 20:45:16.866103    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 20:45:16.874487    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 20:45:16.883064    8582 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 20:45:16.883124    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 20:45:16.891544    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:45:16.899891    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 20:45:16.908854    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:45:16.917241    8582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 20:45:16.925052    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 20:45:16.933701    8582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 20:45:16.940344    8582 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0222 20:45:16.941138    8582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 20:45:16.948217    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:45:17.016318    8582 ssh_runner.go:195] Run: sudo systemctl restart containerd
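The sed commands above rewrite /etc/containerd/config.toml to match the "cgroupfs" driver minikube detected: they pin the pause image, force SystemdCgroup = false, switch the runtime to io.containerd.runc.v2 and point conf_dir at /etc/cni/net.d before containerd is restarted. A minimal Go sketch of just the SystemdCgroup rewrite (not minikube's actual code, only a reimplementation of the sed one-liner for illustration):

    // sketch: force SystemdCgroup = false in containerd's config.toml,
    // mirroring the `sudo sed -i -r 's|^( *)SystemdCgroup = ...'` command above
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml" // same file the sed commands edit
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }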
	I0222 20:45:17.088679    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:45:17.088698    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:45:17.088767    8582 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 20:45:17.098323    8582 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0222 20:45:17.098601    8582 command_runner.go:130] > [Unit]
	I0222 20:45:17.098610    8582 command_runner.go:130] > Description=Docker Application Container Engine
	I0222 20:45:17.098615    8582 command_runner.go:130] > Documentation=https://docs.docker.com
	I0222 20:45:17.098620    8582 command_runner.go:130] > BindsTo=containerd.service
	I0222 20:45:17.098627    8582 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0222 20:45:17.098636    8582 command_runner.go:130] > Wants=network-online.target
	I0222 20:45:17.098660    8582 command_runner.go:130] > Requires=docker.socket
	I0222 20:45:17.098674    8582 command_runner.go:130] > StartLimitBurst=3
	I0222 20:45:17.098685    8582 command_runner.go:130] > StartLimitIntervalSec=60
	I0222 20:45:17.098699    8582 command_runner.go:130] > [Service]
	I0222 20:45:17.098709    8582 command_runner.go:130] > Type=notify
	I0222 20:45:17.098725    8582 command_runner.go:130] > Restart=on-failure
	I0222 20:45:17.098741    8582 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0222 20:45:17.098750    8582 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0222 20:45:17.098756    8582 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0222 20:45:17.098762    8582 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0222 20:45:17.098767    8582 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0222 20:45:17.098772    8582 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0222 20:45:17.098777    8582 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0222 20:45:17.098788    8582 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0222 20:45:17.098794    8582 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0222 20:45:17.098797    8582 command_runner.go:130] > ExecStart=
	I0222 20:45:17.098808    8582 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0222 20:45:17.098813    8582 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0222 20:45:17.098819    8582 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0222 20:45:17.098841    8582 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0222 20:45:17.098849    8582 command_runner.go:130] > LimitNOFILE=infinity
	I0222 20:45:17.098853    8582 command_runner.go:130] > LimitNPROC=infinity
	I0222 20:45:17.098856    8582 command_runner.go:130] > LimitCORE=infinity
	I0222 20:45:17.098865    8582 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0222 20:45:17.098877    8582 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0222 20:45:17.098888    8582 command_runner.go:130] > TasksMax=infinity
	I0222 20:45:17.098895    8582 command_runner.go:130] > TimeoutStartSec=0
	I0222 20:45:17.098902    8582 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0222 20:45:17.098912    8582 command_runner.go:130] > Delegate=yes
	I0222 20:45:17.098924    8582 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0222 20:45:17.098933    8582 command_runner.go:130] > KillMode=process
	I0222 20:45:17.098945    8582 command_runner.go:130] > [Install]
	I0222 20:45:17.098951    8582 command_runner.go:130] > WantedBy=multi-user.target
	I0222 20:45:17.099285    8582 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 20:45:17.099367    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 20:45:17.110324    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:45:17.123770    8582 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:45:17.123794    8582 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:45:17.124722    8582 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 20:45:17.231373    8582 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 20:45:17.294094    8582 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 20:45:17.294115    8582 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 20:45:17.331972    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:45:17.429640    8582 ssh_runner.go:195] Run: sudo systemctl restart docker
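The 144-byte /etc/docker/daemon.json copied to the node just before this restart is not echoed in the log. As an assumption about its typical shape (a cgroupfs-pinning daemon.json, not the verbatim payload minikube wrote), it could be produced like this:

    // sketch with assumed content: write a daemon.json that keeps Docker on the
    // cgroupfs driver; keys besides exec-opts are illustrative defaults, not
    // values read from this log
    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	cfg := map[string]any{
    		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
    		"log-driver":     "json-file",
    		"log-opts":       map[string]string{"max-size": "100m"},
    		"storage-driver": "overlay2",
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("daemon.json", out, 0o644); err != nil {
    		panic(err)
    	}
    }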
	I0222 20:45:17.653587    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:45:17.727213    8582 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0222 20:45:17.727382    8582 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 20:45:17.796152    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:45:17.866182    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:45:17.936312    8582 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 20:45:17.956762    8582 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 20:45:17.956849    8582 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 20:45:17.960898    8582 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0222 20:45:17.960909    8582 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0222 20:45:17.960915    8582 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0222 20:45:17.960920    8582 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0222 20:45:17.960928    8582 command_runner.go:130] > Access: 2023-02-23 04:45:17.943684243 +0000
	I0222 20:45:17.960936    8582 command_runner.go:130] > Modify: 2023-02-23 04:45:17.943684243 +0000
	I0222 20:45:17.960941    8582 command_runner.go:130] > Change: 2023-02-23 04:45:17.953684243 +0000
	I0222 20:45:17.960944    8582 command_runner.go:130] >  Birth: -
	I0222 20:45:17.960964    8582 start.go:553] Will wait 60s for crictl version
	I0222 20:45:17.961005    8582 ssh_runner.go:195] Run: which crictl
	I0222 20:45:17.964655    8582 command_runner.go:130] > /usr/bin/crictl
	I0222 20:45:17.964838    8582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 20:45:18.060867    8582 command_runner.go:130] > Version:  0.1.0
	I0222 20:45:18.060879    8582 command_runner.go:130] > RuntimeName:  docker
	I0222 20:45:18.060884    8582 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0222 20:45:18.060889    8582 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0222 20:45:18.062862    8582 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 20:45:18.062943    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:45:18.086461    8582 command_runner.go:130] > 23.0.1
	I0222 20:45:18.088071    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:45:18.110793    8582 command_runner.go:130] > 23.0.1
	I0222 20:45:18.155382    8582 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 20:45:18.155536    8582 cli_runner.go:164] Run: docker exec -t multinode-216000 dig +short host.docker.internal
	I0222 20:45:18.267541    8582 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 20:45:18.267659    8582 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 20:45:18.272361    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:45:18.282378    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:18.341346    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:45:18.341427    8582 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 20:45:18.360199    8582 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0222 20:45:18.360212    8582 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0222 20:45:18.360217    8582 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0222 20:45:18.360224    8582 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0222 20:45:18.360239    8582 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0222 20:45:18.360244    8582 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0222 20:45:18.360250    8582 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0222 20:45:18.360256    8582 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 20:45:18.362067    8582 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 20:45:18.362082    8582 docker.go:560] Images already preloaded, skipping extraction
	I0222 20:45:18.362183    8582 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 20:45:18.380254    8582 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0222 20:45:18.380274    8582 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0222 20:45:18.380282    8582 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0222 20:45:18.380292    8582 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0222 20:45:18.380299    8582 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0222 20:45:18.380306    8582 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0222 20:45:18.380315    8582 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0222 20:45:18.380328    8582 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 20:45:18.381893    8582 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 20:45:18.381905    8582 cache_images.go:84] Images are preloaded, skipping loading
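Both `docker images --format {{.Repository}}:{{.Tag}}` listings above already contain every image required for Kubernetes v1.26.1, which is why the preload tarball is not extracted again. A rough, hypothetical way to reproduce that check outside minikube (not the cache_images.go logic itself) is:

    // sketch: confirm the preloaded images from the stdout block above are present
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	want := []string{
    		"registry.k8s.io/kube-apiserver:v1.26.1",
    		"registry.k8s.io/kube-controller-manager:v1.26.1",
    		"registry.k8s.io/kube-scheduler:v1.26.1",
    		"registry.k8s.io/kube-proxy:v1.26.1",
    		"registry.k8s.io/etcd:3.5.6-0",
    		"registry.k8s.io/pause:3.9",
    		"registry.k8s.io/coredns/coredns:v1.9.3",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range want {
    		if !have[img] {
    			fmt.Println("missing:", img)
    		}
    	}
    }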
	I0222 20:45:18.381998    8582 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 20:45:18.406094    8582 command_runner.go:130] > cgroupfs
	I0222 20:45:18.407712    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:45:18.407725    8582 cni.go:136] 1 nodes found, recommending kindnet
	I0222 20:45:18.407744    8582 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 20:45:18.407765    8582 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-216000 NodeName:multinode-216000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 20:45:18.407883    8582 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-216000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 20:45:18.407963    8582 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-216000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0222 20:45:18.408033    8582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 20:45:18.415368    8582 command_runner.go:130] > kubeadm
	I0222 20:45:18.415380    8582 command_runner.go:130] > kubectl
	I0222 20:45:18.415386    8582 command_runner.go:130] > kubelet
	I0222 20:45:18.416303    8582 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 20:45:18.416357    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 20:45:18.423829    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0222 20:45:18.437399    8582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 20:45:18.450918    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0222 20:45:18.464004    8582 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0222 20:45:18.468378    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:45:18.478968    8582 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000 for IP: 192.168.58.2
	I0222 20:45:18.479007    8582 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.479233    8582 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 20:45:18.479298    8582 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 20:45:18.479350    8582 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key
	I0222 20:45:18.479363    8582 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt with IP's: []
	I0222 20:45:18.807872    8582 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt ...
	I0222 20:45:18.807890    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt: {Name:mk734ac8a5dfe0a534e9eb7b833d4a5e48c8bc37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.808232    8582 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key ...
	I0222 20:45:18.808240    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key: {Name:mka355a8e15740137d1e2e5ff0e4b2c22c313a89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.808486    8582 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041
	I0222 20:45:18.808503    8582 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0222 20:45:18.994693    8582 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041 ...
	I0222 20:45:18.994706    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041: {Name:mk63b66cc283eb07720bb76a77d00d37e04a39d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.994973    8582 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041 ...
	I0222 20:45:18.994983    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041: {Name:mk8329c082e2a26c2595c267885c85db2235c6f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:18.995165    8582 certs.go:333] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt
	I0222 20:45:18.995350    8582 certs.go:337] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key
	I0222 20:45:18.995515    8582 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key
	I0222 20:45:18.995532    8582 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt with IP's: []
	I0222 20:45:19.113820    8582 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt ...
	I0222 20:45:19.113828    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt: {Name:mk8138fac670db3215d5364fec33c5ab93eb8c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:19.114028    8582 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key ...
	I0222 20:45:19.114036    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key: {Name:mk5e8e4f17feb0310021a3cb9d6f540378c4c54b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
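The profile reuses the shared minikubeCA and proxyClientCA keys and then mints three leaf pairs: the kubectl client certificate, the apiserver certificate with IP SANs [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1], and the aggregator proxy-client certificate. A minimal Go sketch of issuing one such IP-SAN certificate from an existing CA follows; the ca.crt/ca.key file names and the PKCS#1 key encoding are assumptions, not the profile's actual layout:

    // sketch: sign a server certificate with IP SANs using an existing CA key pair
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, err := os.ReadFile("ca.crt") // assumed file name
    	if err != nil {
    		panic(err)
    	}
    	caKeyPEM, err := os.ReadFile("ca.key") // assumed file name, PKCS#1 encoded
    	if err != nil {
    		panic(err)
    	}
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }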
	I0222 20:45:19.114212    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0222 20:45:19.114240    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0222 20:45:19.114260    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0222 20:45:19.114282    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0222 20:45:19.114301    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0222 20:45:19.114320    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0222 20:45:19.114337    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0222 20:45:19.114355    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0222 20:45:19.114445    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 20:45:19.114491    8582 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 20:45:19.114502    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 20:45:19.114536    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 20:45:19.114565    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 20:45:19.114593    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 20:45:19.114658    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:45:19.114691    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.114711    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem -> /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.114730    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.115133    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 20:45:19.135159    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0222 20:45:19.153613    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 20:45:19.170940    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0222 20:45:19.188737    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 20:45:19.206355    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 20:45:19.224583    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 20:45:19.242301    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 20:45:19.260332    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 20:45:19.278549    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 20:45:19.295847    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 20:45:19.313581    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 20:45:19.326802    8582 ssh_runner.go:195] Run: openssl version
	I0222 20:45:19.332108    8582 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0222 20:45:19.332474    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 20:45:19.340663    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.344543    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.344695    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.344737    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:45:19.350047    8582 command_runner.go:130] > b5213941
	I0222 20:45:19.350235    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 20:45:19.358345    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 20:45:19.366377    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.370383    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.370487    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.370538    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 20:45:19.375953    8582 command_runner.go:130] > 51391683
	I0222 20:45:19.376472    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 20:45:19.384513    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 20:45:19.393148    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.397295    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.397427    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.397484    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 20:45:19.402536    8582 command_runner.go:130] > 3ec20f2e
	I0222 20:45:19.402861    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 20:45:19.411005    8582 kubeadm.go:401] StartCluster: {Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:45:19.411119    8582 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 20:45:19.430184    8582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 20:45:19.438275    8582 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0222 20:45:19.438286    8582 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0222 20:45:19.438291    8582 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0222 20:45:19.438350    8582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 20:45:19.445915    8582 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 20:45:19.445966    8582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 20:45:19.453427    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0222 20:45:19.453448    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0222 20:45:19.453455    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0222 20:45:19.453461    8582 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 20:45:19.453483    8582 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 20:45:19.453502    8582 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 20:45:19.505948    8582 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0222 20:45:19.505948    8582 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0222 20:45:19.505994    8582 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 20:45:19.506008    8582 command_runner.go:130] > [preflight] Running pre-flight checks
	I0222 20:45:19.613107    8582 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 20:45:19.613126    8582 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 20:45:19.613208    8582 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 20:45:19.613217    8582 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 20:45:19.613306    8582 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 20:45:19.613324    8582 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 20:45:19.743015    8582 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 20:45:19.743037    8582 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 20:45:19.786700    8582 out.go:204]   - Generating certificates and keys ...
	I0222 20:45:19.786782    8582 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0222 20:45:19.786805    8582 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 20:45:19.786865    8582 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0222 20:45:19.786871    8582 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 20:45:19.850709    8582 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 20:45:19.850717    8582 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 20:45:19.987582    8582 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0222 20:45:19.987592    8582 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0222 20:45:20.111956    8582 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0222 20:45:20.111984    8582 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0222 20:45:20.367090    8582 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0222 20:45:20.367149    8582 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0222 20:45:20.460324    8582 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0222 20:45:20.460337    8582 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0222 20:45:20.460440    8582 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.460448    8582 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.634190    8582 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0222 20:45:20.634208    8582 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0222 20:45:20.634336    8582 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.634348    8582 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-216000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0222 20:45:20.712988    8582 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 20:45:20.713007    8582 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 20:45:20.783015    8582 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 20:45:20.783030    8582 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 20:45:20.893467    8582 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0222 20:45:20.893477    8582 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0222 20:45:20.893513    8582 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 20:45:20.893518    8582 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 20:45:21.008488    8582 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 20:45:21.008501    8582 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 20:45:21.326406    8582 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 20:45:21.326428    8582 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 20:45:21.451239    8582 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 20:45:21.451254    8582 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 20:45:21.784843    8582 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 20:45:21.784877    8582 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 20:45:21.797053    8582 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:45:21.797084    8582 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:45:21.798086    8582 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:45:21.798097    8582 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:45:21.798144    8582 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0222 20:45:21.798154    8582 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0222 20:45:21.874401    8582 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 20:45:21.874415    8582 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 20:45:21.924722    8582 out.go:204]   - Booting up control plane ...
	I0222 20:45:21.924875    8582 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 20:45:21.924941    8582 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 20:45:21.925026    8582 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 20:45:21.925031    8582 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 20:45:21.925080    8582 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 20:45:21.925091    8582 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 20:45:21.925218    8582 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 20:45:21.925227    8582 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 20:45:21.925379    8582 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 20:45:21.925412    8582 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 20:45:30.880182    8582 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.001447 seconds
	I0222 20:45:30.880189    8582 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.001447 seconds
	I0222 20:45:30.880320    8582 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0222 20:45:30.880330    8582 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0222 20:45:30.891029    8582 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0222 20:45:30.891059    8582 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0222 20:45:31.408095    8582 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0222 20:45:31.408105    8582 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0222 20:45:31.408264    8582 kubeadm.go:322] [mark-control-plane] Marking the node multinode-216000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0222 20:45:31.408278    8582 command_runner.go:130] > [mark-control-plane] Marking the node multinode-216000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0222 20:45:31.917178    8582 kubeadm.go:322] [bootstrap-token] Using token: 5jwevw.jx77rxsr3wyi2rry
	I0222 20:45:31.917198    8582 command_runner.go:130] > [bootstrap-token] Using token: 5jwevw.jx77rxsr3wyi2rry
	I0222 20:45:31.954318    8582 out.go:204]   - Configuring RBAC rules ...
	I0222 20:45:31.954483    8582 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0222 20:45:31.954497    8582 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0222 20:45:31.957489    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0222 20:45:31.957508    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0222 20:45:31.999853    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0222 20:45:31.999861    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0222 20:45:32.002246    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0222 20:45:32.002251    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0222 20:45:32.004573    8582 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0222 20:45:32.004583    8582 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0222 20:45:32.006862    8582 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0222 20:45:32.006871    8582 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0222 20:45:32.015273    8582 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0222 20:45:32.015289    8582 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0222 20:45:32.155838    8582 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0222 20:45:32.155852    8582 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0222 20:45:32.361026    8582 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0222 20:45:32.361044    8582 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0222 20:45:32.361629    8582 kubeadm.go:322] 
	I0222 20:45:32.361720    8582 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0222 20:45:32.361732    8582 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0222 20:45:32.361742    8582 kubeadm.go:322] 
	I0222 20:45:32.361800    8582 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0222 20:45:32.361816    8582 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0222 20:45:32.361823    8582 kubeadm.go:322] 
	I0222 20:45:32.361845    8582 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0222 20:45:32.361850    8582 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0222 20:45:32.361900    8582 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0222 20:45:32.361904    8582 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0222 20:45:32.361943    8582 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0222 20:45:32.361951    8582 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0222 20:45:32.361965    8582 kubeadm.go:322] 
	I0222 20:45:32.362025    8582 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0222 20:45:32.362032    8582 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0222 20:45:32.362038    8582 kubeadm.go:322] 
	I0222 20:45:32.362101    8582 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0222 20:45:32.362109    8582 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0222 20:45:32.362120    8582 kubeadm.go:322] 
	I0222 20:45:32.362169    8582 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0222 20:45:32.362176    8582 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0222 20:45:32.362264    8582 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0222 20:45:32.362275    8582 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0222 20:45:32.362369    8582 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0222 20:45:32.362373    8582 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0222 20:45:32.362385    8582 kubeadm.go:322] 
	I0222 20:45:32.362444    8582 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0222 20:45:32.362449    8582 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0222 20:45:32.362507    8582 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0222 20:45:32.362509    8582 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0222 20:45:32.362516    8582 kubeadm.go:322] 
	I0222 20:45:32.362570    8582 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362574    8582 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362650    8582 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf \
	I0222 20:45:32.362656    8582 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf \
	I0222 20:45:32.362670    8582 command_runner.go:130] > 	--control-plane 
	I0222 20:45:32.362673    8582 kubeadm.go:322] 	--control-plane 
	I0222 20:45:32.362676    8582 kubeadm.go:322] 
	I0222 20:45:32.362745    8582 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0222 20:45:32.362759    8582 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0222 20:45:32.362771    8582 kubeadm.go:322] 
	I0222 20:45:32.362848    8582 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362858    8582 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5jwevw.jx77rxsr3wyi2rry \
	I0222 20:45:32.362965    8582 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 20:45:32.362978    8582 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 20:45:32.419040    8582 kubeadm.go:322] W0223 04:45:19.499041    1297 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 20:45:32.419068    8582 command_runner.go:130] ! W0223 04:45:19.499041    1297 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 20:45:32.419231    8582 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 20:45:32.419250    8582 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 20:45:32.419421    8582 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 20:45:32.419430    8582 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
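Both kubeadm join commands printed above pin trust in the cluster CA through --discovery-token-ca-cert-hash sha256:430b5988…; kubeadm defines that value as the SHA-256 digest of the DER-encoded Subject Public Key Info of the CA certificate, so it can be recomputed from ca.crt on the control-plane node. A small Go sketch (the path is kubeadm's default, assumed rather than taken from this log):

    // sketch: recompute kubeadm's discovery-token-ca-cert-hash from the cluster CA
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed default path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }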
	I0222 20:45:32.419451    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:45:32.419464    8582 cni.go:136] 1 nodes found, recommending kindnet
	I0222 20:45:32.458559    8582 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0222 20:45:32.501679    8582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0222 20:45:32.506820    8582 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0222 20:45:32.506836    8582 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0222 20:45:32.506847    8582 command_runner.go:130] > Device: a6h/166d	Inode: 267135      Links: 1
	I0222 20:45:32.506878    8582 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:45:32.506891    8582 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:45:32.506911    8582 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:45:32.506918    8582 command_runner.go:130] > Change: 2023-02-23 04:22:33.946629303 +0000
	I0222 20:45:32.506922    8582 command_runner.go:130] >  Birth: -
	I0222 20:45:32.506955    8582 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0222 20:45:32.506965    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0222 20:45:32.520767    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0222 20:45:33.085605    8582 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0222 20:45:33.089214    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0222 20:45:33.094865    8582 command_runner.go:130] > serviceaccount/kindnet created
	I0222 20:45:33.101971    8582 command_runner.go:130] > daemonset.apps/kindnet created
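	The kindnet CNI manifest applied above created a ClusterRole, ClusterRoleBinding, ServiceAccount, and DaemonSet. If one wanted to confirm the DaemonSet actually rolls out on this node, a command along these lines would do it (illustrative only; this check is not part of the recorded run):

	  sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset/kindnet --timeout=2m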
	I0222 20:45:33.108092    8582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0222 20:45:33.108197    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.108199    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321 minikube.k8s.io/name=multinode-216000 minikube.k8s.io/updated_at=2023_02_22T20_45_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.120620    8582 command_runner.go:130] > -16
	I0222 20:45:33.120829    8582 ops.go:34] apiserver oom_adj: -16
	I0222 20:45:33.235389    8582 command_runner.go:130] > node/multinode-216000 labeled
	I0222 20:45:33.235429    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0222 20:45:33.235530    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.298748    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:33.798947    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:33.863655    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:34.299929    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:34.366381    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:34.799577    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:34.862911    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:35.301034    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:35.366783    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:35.799539    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:35.866693    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:36.299697    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:36.362551    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:36.798880    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:36.861454    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:37.299204    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:37.363129    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:37.799017    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:37.865543    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:38.299118    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:38.366698    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:38.799082    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:38.863705    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:39.300415    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:39.362843    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:39.800939    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:39.865957    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:40.299731    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:40.363412    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:40.799062    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:40.863422    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:41.300979    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:41.364245    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:41.798819    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:41.862910    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:42.301010    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:42.368353    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:42.799757    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:42.860598    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:43.298910    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:43.358655    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:43.798911    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:43.862382    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:44.298810    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:44.367729    8582 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0222 20:45:44.800895    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 20:45:44.878473    8582 command_runner.go:130] > NAME      SECRETS   AGE
	I0222 20:45:44.878486    8582 command_runner.go:130] > default   0         0s
	I0222 20:45:44.878501    8582 kubeadm.go:1073] duration metric: took 11.770516906s to wait for elevateKubeSystemPrivileges.
	I0222 20:45:44.878511    8582 kubeadm.go:403] StartCluster complete in 25.46780247s
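	The block of repeated "kubectl get sa default" calls above is minikube waiting for the "default" ServiceAccount to exist before finishing cluster bring-up (the elevateKubeSystemPrivileges step). A minimal shell sketch of that wait, assuming the same in-VM kubectl binary and kubeconfig shown in the log, is:

	  # illustrative sketch, not part of the recorded test output
	  until sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # the log shows roughly one retry every 500ms
	  done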
	I0222 20:45:44.878534    8582 settings.go:142] acquiring lock: {Name:mk09b0ae3061a5d1df7256089aea48f15d65cbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:44.878624    8582 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:44.879095    8582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:45:44.879359    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0222 20:45:44.879382    8582 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0222 20:45:44.879439    8582 addons.go:65] Setting storage-provisioner=true in profile "multinode-216000"
	I0222 20:45:44.879459    8582 addons.go:227] Setting addon storage-provisioner=true in "multinode-216000"
	I0222 20:45:44.879460    8582 addons.go:65] Setting default-storageclass=true in profile "multinode-216000"
	I0222 20:45:44.879476    8582 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-216000"
	I0222 20:45:44.879500    8582 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:45:44.879515    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:45:44.879557    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:44.879742    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:44.879842    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:44.879816    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:45:44.883685    8582 cert_rotation.go:137] Starting client certificate rotation controller
	I0222 20:45:44.883956    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:45:44.883966    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:44.883974    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:44.883981    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:44.925690    8582 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0222 20:45:44.925708    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:44.925715    8582 round_trippers.go:580]     Audit-Id: 347f7f79-722b-4f7c-88bc-d8ed156f5606
	I0222 20:45:44.925722    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:44.925726    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:44.925731    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:44.925737    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:44.925746    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:45:44.925752    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:44 GMT
	I0222 20:45:44.925793    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"302","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:44.926183    8582 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"302","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:44.926214    8582 round_trippers.go:463] PUT https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:45:44.926218    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:44.926224    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:44.926230    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:44.926235    8582 round_trippers.go:473]     Content-Type: application/json
	I0222 20:45:44.933928    8582 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0222 20:45:44.933947    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:44.933960    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:44.933968    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:44.933986    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:44.933994    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:44.934002    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:45:44.934010    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:44 GMT
	I0222 20:45:44.934017    8582 round_trippers.go:580]     Audit-Id: 73608203-7243-4885-ac82-d5f47c1f08dd
	I0222 20:45:44.934036    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"328","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:44.979026    8582 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 20:45:44.954997    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:44.979427    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:45:45.015387    8582 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 20:45:45.015937    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/storage.k8s.io/v1/storageclasses
	I0222 20:45:45.053309    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:45.053287    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0222 20:45:45.053323    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:45.053333    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:45.053446    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:45.056950    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:45.056983    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:45.056994    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:45.057002    8582 round_trippers.go:580]     Content-Length: 109
	I0222 20:45:45.057010    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:45 GMT
	I0222 20:45:45.057017    8582 round_trippers.go:580]     Audit-Id: 5f21578f-5a11-46bc-83cd-cf8aed0de574
	I0222 20:45:45.057024    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:45.057033    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:45.057045    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:45.057078    8582 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"336"},"items":[]}
	I0222 20:45:45.057427    8582 addons.go:227] Setting addon default-storageclass=true in "multinode-216000"
	I0222 20:45:45.057470    8582 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:45:45.057994    8582 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:45:45.065903    8582 command_runner.go:130] > apiVersion: v1
	I0222 20:45:45.065956    8582 command_runner.go:130] > data:
	I0222 20:45:45.065966    8582 command_runner.go:130] >   Corefile: |
	I0222 20:45:45.065972    8582 command_runner.go:130] >     .:53 {
	I0222 20:45:45.065977    8582 command_runner.go:130] >         errors
	I0222 20:45:45.065991    8582 command_runner.go:130] >         health {
	I0222 20:45:45.066007    8582 command_runner.go:130] >            lameduck 5s
	I0222 20:45:45.066017    8582 command_runner.go:130] >         }
	I0222 20:45:45.066023    8582 command_runner.go:130] >         ready
	I0222 20:45:45.066037    8582 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0222 20:45:45.066050    8582 command_runner.go:130] >            pods insecure
	I0222 20:45:45.066063    8582 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0222 20:45:45.066077    8582 command_runner.go:130] >            ttl 30
	I0222 20:45:45.066086    8582 command_runner.go:130] >         }
	I0222 20:45:45.066101    8582 command_runner.go:130] >         prometheus :9153
	I0222 20:45:45.066116    8582 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0222 20:45:45.066133    8582 command_runner.go:130] >            max_concurrent 1000
	I0222 20:45:45.066148    8582 command_runner.go:130] >         }
	I0222 20:45:45.066157    8582 command_runner.go:130] >         cache 30
	I0222 20:45:45.066162    8582 command_runner.go:130] >         loop
	I0222 20:45:45.066170    8582 command_runner.go:130] >         reload
	I0222 20:45:45.066178    8582 command_runner.go:130] >         loadbalance
	I0222 20:45:45.066182    8582 command_runner.go:130] >     }
	I0222 20:45:45.066185    8582 command_runner.go:130] > kind: ConfigMap
	I0222 20:45:45.066189    8582 command_runner.go:130] > metadata:
	I0222 20:45:45.066209    8582 command_runner.go:130] >   creationTimestamp: "2023-02-23T04:45:32Z"
	I0222 20:45:45.066219    8582 command_runner.go:130] >   name: coredns
	I0222 20:45:45.066224    8582 command_runner.go:130] >   namespace: kube-system
	I0222 20:45:45.066228    8582 command_runner.go:130] >   resourceVersion: "229"
	I0222 20:45:45.066236    8582 command_runner.go:130] >   uid: 870d5158-e67f-46a4-a4ff-0208e33d2315
	I0222 20:45:45.066492    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0222 20:45:45.125893    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:45.130388    8582 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0222 20:45:45.130402    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0222 20:45:45.130486    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:45.201315    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:45:45.335067    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 20:45:45.435009    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:45:45.435028    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:45.435034    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:45.435039    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:45.438380    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0222 20:45:45.438666    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:45.438680    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:45.438688    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:45.438700    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:45:45.438708    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:45 GMT
	I0222 20:45:45.438717    8582 round_trippers.go:580]     Audit-Id: a97bba55-b87b-4670-bd3c-38900e852e3e
	I0222 20:45:45.438733    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:45.438745    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:45.438757    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:45.439026    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"355","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0222 20:45:45.439106    8582 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-216000" context rescaled to 1 replicas
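	The GET/PUT pair against the coredns scale subresource above drops the deployment from 2 replicas to 1 for this single-node profile. The equivalent CLI form, shown only for reference and not executed in this run, would be:

	  sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system scale deployment coredns --replicas=1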
	I0222 20:45:45.439132    8582 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 20:45:45.464740    8582 out.go:177] * Verifying Kubernetes components...
	I0222 20:45:45.505682    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:45:45.638198    8582 command_runner.go:130] > configmap/coredns replaced
	I0222 20:45:45.638228    8582 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
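	Based on the sed expressions in the "kubectl ... replace" pipeline above, the rewritten Corefile gains a "log" directive ahead of "errors" and a hosts block ahead of the forward stanza, roughly like this (reconstruction from the command, not copied from the run):

	        log
	        errors
	        hosts {
	           192.168.65.2 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }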
	I0222 20:45:45.854922    8582 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0222 20:45:45.859426    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0222 20:45:45.924955    8582 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0222 20:45:45.933318    8582 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0222 20:45:45.944055    8582 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0222 20:45:45.951160    8582 command_runner.go:130] > pod/storage-provisioner created
	I0222 20:45:45.957315    8582 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0222 20:45:45.981431    8582 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0222 20:45:45.957508    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:45:46.022904    8582 addons.go:492] enable addons completed in 1.143556258s: enabled=[storage-provisioner default-storageclass]
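	At this point both addons are active; for reference, they could be listed from the host with something like the following (illustrative; not run as part of this test):

	  out/minikube-darwin-amd64 addons list -p multinode-216000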
	I0222 20:45:46.097640    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:45:46.097888    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:45:46.098139    8582 node_ready.go:35] waiting up to 6m0s for node "multinode-216000" to be "Ready" ...
	I0222 20:45:46.098185    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:46.098190    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.098197    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.098203    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.119497    8582 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0222 20:45:46.119522    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.119532    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.119540    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.119565    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.119579    8582 round_trippers.go:580]     Audit-Id: 38eecc37-9749-47c3-817c-ca66d0e05505
	I0222 20:45:46.119589    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.119598    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.119732    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:46.120270    8582 node_ready.go:49] node "multinode-216000" has status "Ready":"True"
	I0222 20:45:46.120279    8582 node_ready.go:38] duration metric: took 22.126807ms waiting for node "multinode-216000" to be "Ready" ...
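	The node readiness probe above goes straight to the apiserver through client-go, but the same check can be expressed from the host with kubectl, assuming kubectl is on PATH and using the kubeconfig minikube just wrote (illustrative only):

	  kubectl --kubeconfig=/Users/jenkins/minikube-integration/15909-2664/kubeconfig \
	    wait --for=condition=Ready node/multinode-216000 --timeout=6m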
	I0222 20:45:46.120286    8582 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:45:46.120346    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:45:46.120352    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.120358    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.120365    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.124743    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:46.124764    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.124774    8582 round_trippers.go:580]     Audit-Id: dcefd3fe-1aa3-43e7-8c44-9a1faf1edc15
	I0222 20:45:46.124804    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.124832    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.124840    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.124847    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.124883    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.126729    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"371"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60448 chars]
	I0222 20:45:46.130680    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:45:46.130742    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:46.130748    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.130755    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.130760    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.135655    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:46.135669    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.135675    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.135685    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.135696    8582 round_trippers.go:580]     Audit-Id: 7deac8ba-52f4-4761-9cd5-feb96461e1f2
	I0222 20:45:46.135709    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.135724    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.135736    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.135863    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:46.136179    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:46.136206    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.136213    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.136218    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.139514    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:46.139532    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.139538    8582 round_trippers.go:580]     Audit-Id: cd711aab-4a7d-4496-ae61-bf22d5d792de
	I0222 20:45:46.139543    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.139547    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.139552    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.139556    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.139561    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.139650    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:46.640023    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:46.640045    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.640052    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.640058    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.642847    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:46.642878    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.642902    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.642915    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.642926    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.642951    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.642985    8582 round_trippers.go:580]     Audit-Id: 05ebcc17-1838-4e58-82e7-bb5f19bcd5a7
	I0222 20:45:46.643001    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.643690    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:46.644241    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:46.644249    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:46.644257    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:46.644265    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:46.647661    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:46.647676    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:46.647683    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:46.647691    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:46.647698    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:46.647705    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:46.647712    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:46 GMT
	I0222 20:45:46.647718    8582 round_trippers.go:580]     Audit-Id: ea628f87-01ea-40ee-a670-8c3b3915e5ea
	I0222 20:45:46.648144    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:47.140073    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:47.140099    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.140147    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.140165    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.143559    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:47.143569    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.143575    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.143580    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.143586    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.143591    8582 round_trippers.go:580]     Audit-Id: 734c800c-2176-498b-8fc1-2f1161d9cff5
	I0222 20:45:47.143596    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.143601    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.143671    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:47.143954    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:47.143961    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.143967    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.143973    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.146088    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:47.146097    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.146106    8582 round_trippers.go:580]     Audit-Id: 3176398c-0784-49d6-b926-578bd3a67013
	I0222 20:45:47.146112    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.146117    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.146122    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.146127    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.146132    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.146199    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:47.640067    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:47.640079    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.640087    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.640093    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.644743    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:47.644763    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.644773    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.644781    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.644793    8582 round_trippers.go:580]     Audit-Id: f476c7a2-5334-4344-8743-7b08cd258212
	I0222 20:45:47.644819    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.644837    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.644862    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.645938    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"356","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0222 20:45:47.646368    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:47.646377    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:47.646384    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:47.646390    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:47.649640    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:47.649659    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:47.649669    8582 round_trippers.go:580]     Audit-Id: e37bb264-5ac4-4b01-81cf-88d5d955268c
	I0222 20:45:47.649678    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:47.649689    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:47.649698    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:47.649721    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:47.649781    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:47 GMT
	I0222 20:45:47.649894    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:48.140047    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:48.140061    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.140068    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.140073    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.144873    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:48.144892    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.144900    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.144905    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.144910    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.144915    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.144920    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.144925    8582 round_trippers.go:580]     Audit-Id: 0a22433c-d6ba-4883-9080-1f85496f5899
	I0222 20:45:48.144996    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:48.145282    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:48.145289    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.145295    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.145300    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.147529    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:48.147539    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.147547    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.147553    8582 round_trippers.go:580]     Audit-Id: 3cc2ed8c-25e0-43c4-bf4d-cbfcc31c44d5
	I0222 20:45:48.147559    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.147564    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.147569    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.147575    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.147839    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:48.148072    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:48.639988    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:48.640004    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.640011    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.640017    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.642844    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:48.642858    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.642871    8582 round_trippers.go:580]     Audit-Id: cb6099cb-4417-4df1-a82e-2d352ceb186b
	I0222 20:45:48.642877    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.642883    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.642889    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.642894    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.642900    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.642992    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:48.643332    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:48.643343    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:48.643352    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:48.643359    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:48.646131    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:48.646144    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:48.646150    8582 round_trippers.go:580]     Audit-Id: 40601f97-c81e-471d-9c5c-1e57768a6604
	I0222 20:45:48.646154    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:48.646159    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:48.646164    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:48.646169    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:48.646177    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:48 GMT
	I0222 20:45:48.646318    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:49.140798    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:49.140823    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.140836    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.140847    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.144682    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:49.144703    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.144714    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.144720    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.144726    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.144732    8582 round_trippers.go:580]     Audit-Id: c7742e65-0312-4ba6-a36b-51642ef4c9e2
	I0222 20:45:49.144739    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.144744    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.144861    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:49.145138    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:49.145146    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.145152    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.145159    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.147541    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:49.147550    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.147556    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.147561    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.147566    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.147571    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.147576    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.147581    8582 round_trippers.go:580]     Audit-Id: cd3870c9-b398-4ab4-9270-8264a4a8781f
	I0222 20:45:49.147639    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:49.641195    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:49.641211    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.641220    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.641227    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.644399    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:49.644419    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.644427    8582 round_trippers.go:580]     Audit-Id: 4cba9a74-2618-4c3c-9e93-80bef23ee618
	I0222 20:45:49.644432    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.644437    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.644442    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.644449    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.644458    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.644545    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:49.644827    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:49.644834    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:49.644840    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:49.644846    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:49.646864    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:49.646875    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:49.646885    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:49.646891    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:49 GMT
	I0222 20:45:49.646896    8582 round_trippers.go:580]     Audit-Id: db72863c-093a-4398-b0e4-7ac468ffd6f8
	I0222 20:45:49.646901    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:49.646906    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:49.646911    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:49.646992    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:50.140677    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:50.140702    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.140799    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.140814    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.145454    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:50.145467    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.145473    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.145484    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.145489    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.145495    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.145499    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.145505    8582 round_trippers.go:580]     Audit-Id: 2cb351e4-aabf-49f8-9655-18f70c7b2a3c
	I0222 20:45:50.145568    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:50.145869    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:50.145875    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.145881    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.145893    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.148111    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:50.148121    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.148126    8582 round_trippers.go:580]     Audit-Id: 31f35fb7-7828-4db1-ab65-b1fe682ef8ac
	I0222 20:45:50.148131    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.148137    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.148141    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.148147    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.148151    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.148210    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:50.148386    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:50.641506    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:50.641526    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.641538    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.641548    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.645827    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:50.645841    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.645847    8582 round_trippers.go:580]     Audit-Id: b6bf9ea6-19ea-4be1-b27f-10a22785fc7e
	I0222 20:45:50.645852    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.645857    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.645863    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.645872    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.645877    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.645969    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:50.646255    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:50.646261    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:50.646268    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:50.646274    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:50.648172    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:45:50.648184    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:50.648191    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:50.648196    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:50 GMT
	I0222 20:45:50.648201    8582 round_trippers.go:580]     Audit-Id: 7968df05-1bf0-4cb4-92c7-54c41245506b
	I0222 20:45:50.648206    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:50.648212    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:50.648217    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:50.648278    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:51.139979    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:51.139992    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.140000    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.140005    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.142879    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:51.142890    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.142896    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.142901    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.142906    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.142911    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.142916    8582 round_trippers.go:580]     Audit-Id: 21cdef4b-1a9a-44c3-b3ff-976fecfa1633
	I0222 20:45:51.142921    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.142988    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:51.143256    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:51.143262    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.143267    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.143273    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.145289    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:51.145300    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.145306    8582 round_trippers.go:580]     Audit-Id: 1b62ad60-f08e-4d4d-ba39-02538b179cff
	I0222 20:45:51.145312    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.145318    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.145323    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.145328    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.145332    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.145386    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:51.640521    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:51.640542    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.640554    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.640569    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.644804    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:45:51.644819    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.644825    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.644834    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.644839    8582 round_trippers.go:580]     Audit-Id: cf595fd9-9d46-4a6b-947d-288fc8a55947
	I0222 20:45:51.644843    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.644848    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.644852    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.644913    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:51.645206    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:51.645212    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:51.645218    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:51.645224    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:51.647693    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:51.647703    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:51.647709    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:51.647713    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:51.647718    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:51.647724    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:51.647729    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:51 GMT
	I0222 20:45:51.647734    8582 round_trippers.go:580]     Audit-Id: f104b9b8-5073-4512-b09b-4cf613e59bbc
	I0222 20:45:51.647792    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:52.140140    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:52.140153    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.140159    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.140164    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.143150    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.143164    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.143173    8582 round_trippers.go:580]     Audit-Id: 1072dd5e-a5bc-4249-9931-3948bd97a535
	I0222 20:45:52.143179    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.143184    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.143191    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.143198    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.143205    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.143442    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:52.143721    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:52.143727    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.143733    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.143738    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.146009    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.146020    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.146028    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.146035    8582 round_trippers.go:580]     Audit-Id: 79e6e7c7-8e43-406f-a996-d5eb26e374a0
	I0222 20:45:52.146041    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.146047    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.146052    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.146057    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.146126    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:52.641154    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:52.641166    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.641173    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.641178    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.643969    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.643979    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.643985    8582 round_trippers.go:580]     Audit-Id: 487e4bd3-7c15-4850-8ca9-410deef08a9f
	I0222 20:45:52.643990    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.643995    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.644000    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.644005    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.644010    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.644231    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:52.644522    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:52.644528    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:52.644534    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:52.644540    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:52.646899    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:52.646909    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:52.646914    8582 round_trippers.go:580]     Audit-Id: 3f463525-4185-48d7-b980-519a6a2c4e42
	I0222 20:45:52.646919    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:52.646924    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:52.646929    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:52.646934    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:52.646940    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:52 GMT
	I0222 20:45:52.647005    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:52.647291    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:53.140115    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:53.140128    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.140135    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.140143    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.143063    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.143076    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.143082    8582 round_trippers.go:580]     Audit-Id: 6bf7e802-cd5d-4db2-ab6b-c20d33ff04e0
	I0222 20:45:53.143094    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.143099    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.143104    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.143109    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.143114    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.143183    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:53.143518    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:53.143526    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.143532    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.143537    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.146495    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.146506    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.146512    8582 round_trippers.go:580]     Audit-Id: ec00560f-c828-4cb4-bb4e-864dba4ce460
	I0222 20:45:53.146517    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.146530    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.146536    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.146541    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.146546    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.146670    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"307","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:53.639965    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:53.639984    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.639993    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.640001    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.642894    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.642909    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.642923    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.642932    8582 round_trippers.go:580]     Audit-Id: cf9ae1c0-10f3-4598-a024-b7a098024171
	I0222 20:45:53.642939    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.642947    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.642958    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.642982    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.643718    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:53.644206    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:53.644213    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:53.644220    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:53.644226    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:53.646715    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:53.646729    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:53.646736    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:53.646743    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:53.646751    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:53 GMT
	I0222 20:45:53.646763    8582 round_trippers.go:580]     Audit-Id: 09bc7ec7-e54c-4c2e-8780-9cba4be468ac
	I0222 20:45:53.646771    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:53.646777    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:53.646976    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:54.139974    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:54.140011    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.140070    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.140079    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.143089    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:54.143102    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.143108    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.143113    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.143118    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.143123    8582 round_trippers.go:580]     Audit-Id: cb56166b-e6f5-4bb0-85e5-839ba3dfc7ca
	I0222 20:45:54.143128    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.143135    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.143198    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:54.143511    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:54.143518    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.143524    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.143530    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.146122    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:54.146138    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.146146    8582 round_trippers.go:580]     Audit-Id: ea24488e-26e8-4eb1-9dd6-eb19e41ef545
	I0222 20:45:54.146153    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.146175    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.146188    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.146199    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.146205    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.146319    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:54.639923    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:54.639939    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.639945    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.639952    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.643433    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:54.643445    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.643451    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.643461    8582 round_trippers.go:580]     Audit-Id: 9fd5220f-5b7f-476a-91a8-69ca81236f30
	I0222 20:45:54.643467    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.643472    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.643477    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.643482    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.643552    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:54.643886    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:54.643894    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:54.643902    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:54.643910    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:54.646104    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:54.646117    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:54.646123    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:54.646139    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:54.646147    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:54 GMT
	I0222 20:45:54.646155    8582 round_trippers.go:580]     Audit-Id: c9868175-a95b-41b4-8e80-a4d1e58a7f18
	I0222 20:45:54.646163    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:54.646203    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:54.646411    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:55.140021    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:55.140035    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.140043    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.140048    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.142834    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.142850    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.142857    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.142863    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.142870    8582 round_trippers.go:580]     Audit-Id: 1211f572-a9be-4a51-a55c-a4ba504587a1
	I0222 20:45:55.142877    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.142889    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.142900    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.143034    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:55.143365    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:55.143372    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.143378    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.143386    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.146228    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.146239    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.146246    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.146251    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.146255    8582 round_trippers.go:580]     Audit-Id: 36b34839-3ee1-47cc-9b9c-ab7a03fa563a
	I0222 20:45:55.146261    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.146265    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.146271    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.146362    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:55.146560    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
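The cycle above is minikube's readiness wait: roughly every 500 ms it GETs the CoreDNS pod and the multinode-216000 node, and pod_ready.go reports that the pod's "Ready" condition is still "False". As a rough illustration of that kind of check, here is a minimal client-go sketch; waitPodReady is a made-up helper name for this report and is not minikube's actual pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
// Illustrative only; the real wait loop lives in minikube's pod_ready.go.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence visible in this log
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-787d4945fb-48v9r", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

The same request/response cycle repeats throughout the rest of this log until the pod reports Ready or the wait times out.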
	I0222 20:45:55.640075    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:55.640088    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.640098    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.640104    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.643028    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.643055    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.643078    8582 round_trippers.go:580]     Audit-Id: 25244bc4-a078-4bab-8d55-e11eb15cfaf0
	I0222 20:45:55.643092    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.643104    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.643119    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.643130    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.643138    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.643220    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:55.643533    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:55.643540    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:55.643546    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:55.643551    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:55.645884    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:55.645893    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:55.645899    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:55.645904    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:55 GMT
	I0222 20:45:55.645909    8582 round_trippers.go:580]     Audit-Id: 36c0a7de-850c-4597-9f67-8c0ba2527d42
	I0222 20:45:55.645918    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:55.645923    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:55.645928    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:55.645986    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:56.139865    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:56.139881    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.139890    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.139895    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.143181    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:56.143195    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.143201    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.143206    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.143211    8582 round_trippers.go:580]     Audit-Id: 435d589e-7201-400d-b2f0-bdd7f57246b7
	I0222 20:45:56.143235    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.143241    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.143245    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.143312    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:56.143603    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:56.143610    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.143616    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.143621    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.146004    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:56.146023    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.146032    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.146042    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.146050    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.146059    8582 round_trippers.go:580]     Audit-Id: 38046551-3dc8-4ac7-8c3a-9ae71e543ca0
	I0222 20:45:56.146067    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.146076    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.146201    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:56.639923    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:56.639938    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.639946    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.639954    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.642916    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:56.642930    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.642935    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.642940    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.642945    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.642950    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.642957    8582 round_trippers.go:580]     Audit-Id: c8fa652c-eb98-4324-99ab-9af2ea295340
	I0222 20:45:56.642965    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.643065    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:56.643382    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:56.643389    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:56.643396    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:56.643401    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:56.645816    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:56.645827    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:56.645834    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:56 GMT
	I0222 20:45:56.645839    8582 round_trippers.go:580]     Audit-Id: f0c8b616-c1c7-430e-be79-c5270a63325b
	I0222 20:45:56.645844    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:56.645849    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:56.645854    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:56.645859    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:56.645940    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:57.139921    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:57.139940    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.139949    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.139958    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.142895    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:57.142910    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.142918    8582 round_trippers.go:580]     Audit-Id: 3cf66bfd-9dc3-4834-9056-ec38cc143b98
	I0222 20:45:57.142926    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.142933    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.142940    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.142948    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.142953    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.143028    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:57.143406    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:57.143414    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.143421    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.143432    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.146610    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:57.146621    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.146627    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.146633    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.146638    8582 round_trippers.go:580]     Audit-Id: 2174f350-326d-4e0a-aa37-bcfb028be85a
	I0222 20:45:57.146643    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.146648    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.146653    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.146708    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:57.146890    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:57.640088    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:57.640102    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.640109    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.640114    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.642850    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:57.642865    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.642871    8582 round_trippers.go:580]     Audit-Id: c1c0801a-d9e1-40d2-a43c-0d936142c7c7
	I0222 20:45:57.642876    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.642881    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.642886    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.642891    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.642896    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.642955    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:57.643260    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:57.643268    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:57.643273    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:57.643279    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:57.645570    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:57.645583    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:57.645589    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:57.645597    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:57.645610    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:57.645619    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:57 GMT
	I0222 20:45:57.645628    8582 round_trippers.go:580]     Audit-Id: 772991df-41aa-427f-9449-91c195b727d9
	I0222 20:45:57.645636    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:57.646116    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:58.140078    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:58.140092    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.140099    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.140104    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.143093    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:58.143116    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.143128    8582 round_trippers.go:580]     Audit-Id: b06beddb-37f6-4c05-bd55-19c244bdec48
	I0222 20:45:58.143137    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.143143    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.143147    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.143155    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.143163    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.143242    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:58.143539    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:58.143546    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.143552    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.143559    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.146147    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:58.146157    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.146163    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.146168    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.146188    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.146192    8582 round_trippers.go:580]     Audit-Id: 799d5e65-61e5-4cfe-87bf-ec1b954b7be1
	I0222 20:45:58.146220    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.146228    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.146329    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:58.640268    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:58.640285    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.640293    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.640299    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.643382    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:58.643395    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.643401    8582 round_trippers.go:580]     Audit-Id: 85116caf-f691-48e2-8d30-a62e10ccf2d2
	I0222 20:45:58.643405    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.643410    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.643416    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.643423    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.643433    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.643512    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:58.643871    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:58.643878    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:58.643885    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:58.643890    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:58.646465    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:58.646481    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:58.646490    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:58.646498    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:58 GMT
	I0222 20:45:58.646506    8582 round_trippers.go:580]     Audit-Id: 1c70a520-2c51-42cb-b0b6-33e4480275a1
	I0222 20:45:58.646512    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:58.646517    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:58.646524    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:58.646811    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:59.139799    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:59.139815    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.139822    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.139828    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.142962    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:59.142981    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.142991    8582 round_trippers.go:580]     Audit-Id: 2ced3e19-9b46-4edd-aa75-768063b86b69
	I0222 20:45:59.142999    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.143007    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.143013    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.143025    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.143034    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.143113    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:59.143469    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:59.143477    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.143486    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.143494    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.146903    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:59.146920    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.146931    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.146941    8582 round_trippers.go:580]     Audit-Id: 542d2864-26dd-461d-bf54-321063cd896e
	I0222 20:45:59.146954    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.146968    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.146982    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.146993    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.147074    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:45:59.147282    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:45:59.639875    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:45:59.639888    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.639895    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.639900    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.642671    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:45:59.642684    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.642690    8582 round_trippers.go:580]     Audit-Id: 2cb6fd90-aba9-42ac-ba1c-d221c9c8e259
	I0222 20:45:59.642697    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.642704    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.642711    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.642718    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.642725    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.642952    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:45:59.643346    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:45:59.643354    8582 round_trippers.go:469] Request Headers:
	I0222 20:45:59.643367    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:45:59.643375    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:45:59.646445    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:45:59.646459    8582 round_trippers.go:577] Response Headers:
	I0222 20:45:59.646467    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:45:59.646474    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:45:59 GMT
	I0222 20:45:59.646482    8582 round_trippers.go:580]     Audit-Id: c57b85f4-368f-469e-b6fd-f0e4efd0c942
	I0222 20:45:59.646489    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:45:59.646496    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:45:59.646503    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:45:59.646592    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:00.140030    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:00.140046    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.140053    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.140058    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.143022    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:00.143033    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.143039    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.143045    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.143058    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.143071    8582 round_trippers.go:580]     Audit-Id: 08672f7a-b6eb-4bf4-a938-b407b84353c8
	I0222 20:46:00.143077    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.143082    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.143169    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:46:00.143467    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:00.143474    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.143480    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.143485    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.146328    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:00.146341    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.146346    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.146352    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.146357    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.146361    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.146369    8582 round_trippers.go:580]     Audit-Id: 2c483815-1256-4385-9c3e-e63648997f6e
	I0222 20:46:00.146374    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.146445    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:00.639989    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:00.640002    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.640008    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.640014    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.643297    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:00.643324    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.643350    8582 round_trippers.go:580]     Audit-Id: 1641eec4-c118-418a-b4c4-159e03fff41e
	I0222 20:46:00.643355    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.643360    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.643380    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.643384    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.643404    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.643470    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:46:00.643779    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:00.643785    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:00.643790    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:00.643804    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:00.646032    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:00.646041    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:00.646047    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:00.646051    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:00.646057    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:00.646062    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:00.646067    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:00 GMT
	I0222 20:46:00.646073    8582 round_trippers.go:580]     Audit-Id: 89744106-5d95-4394-bd05-23d88939f863
	I0222 20:46:00.646130    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.141682    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:01.141707    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.141760    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.141776    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.146335    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:01.146354    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.146360    8582 round_trippers.go:580]     Audit-Id: c7f5154d-05d0-455f-8415-8152af6bbeea
	I0222 20:46:01.146366    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.146371    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.146376    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.146381    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.146386    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.146446    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"391","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0222 20:46:01.146739    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.146746    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.146754    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.146764    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.149170    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.149179    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.149185    8582 round_trippers.go:580]     Audit-Id: 6a0e6f92-b743-4628-92ec-34cda14d2195
	I0222 20:46:01.149190    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.149196    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.149202    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.149207    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.149211    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.149261    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.149433    8582 pod_ready.go:102] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"False"
	I0222 20:46:01.639921    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:01.639936    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.639945    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.639952    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.643195    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:01.643207    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.643212    8582 round_trippers.go:580]     Audit-Id: be74f7da-9a56-43f7-abd5-8953c4c3e7e4
	I0222 20:46:01.643217    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.643221    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.643226    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.643231    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.643236    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.643297    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0222 20:46:01.643571    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.643577    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.643583    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.643588    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.645797    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.645807    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.645812    8582 round_trippers.go:580]     Audit-Id: ccee74ee-d1d3-4992-910e-5344d050eda6
	I0222 20:46:01.645818    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.645823    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.645828    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.645834    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.645838    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.645891    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.646072    8582 pod_ready.go:92] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.646082    8582 pod_ready.go:81] duration metric: took 15.515560245s waiting for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.646101    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.646135    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-j4pt7
	I0222 20:46:01.646141    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.646149    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.646155    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.647960    8582 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0222 20:46:01.647969    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.647975    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.647980    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.647986    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.647991    8582 round_trippers.go:580]     Content-Length: 216
	I0222 20:46:01.647997    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.648002    8582 round_trippers.go:580]     Audit-Id: 9a9111b3-2786-486c-8ea9-1285ebd6f435
	I0222 20:46:01.648007    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.648018    8582 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-j4pt7\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-j4pt7","kind":"pods"},"code":404}
	I0222 20:46:01.648140    8582 pod_ready.go:97] error getting pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-j4pt7" not found
	I0222 20:46:01.648147    8582 pod_ready.go:81] duration metric: took 2.039438ms waiting for pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace to be "Ready" ...
	E0222 20:46:01.648153    8582 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-j4pt7" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-j4pt7" not found
	I0222 20:46:01.648158    8582 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.648191    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/etcd-multinode-216000
	I0222 20:46:01.648195    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.648202    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.648208    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.650179    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:01.650189    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.650195    8582 round_trippers.go:580]     Audit-Id: 957f7af0-2e7f-4eb9-93b7-2603fff7327b
	I0222 20:46:01.650200    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.650205    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.650210    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.650215    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.650220    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.650274    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-216000","namespace":"kube-system","uid":"c2b06896-f123-48bd-8603-0d7493488f5c","resourceVersion":"389","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.mirror":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257428627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0222 20:46:01.650488    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.650494    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.650500    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.650505    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.652601    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.652611    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.652616    8582 round_trippers.go:580]     Audit-Id: 3aee6e68-170a-4a56-957c-a1ad67425c49
	I0222 20:46:01.652624    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.652629    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.652634    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.652640    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.652645    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.652697    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.652858    8582 pod_ready.go:92] pod "etcd-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.652863    8582 pod_ready.go:81] duration metric: took 4.701382ms waiting for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.652871    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.652895    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-216000
	I0222 20:46:01.652899    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.652905    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.652910    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.655186    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.655194    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.655200    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.655205    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.655210    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.655217    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.655223    8582 round_trippers.go:580]     Audit-Id: e0883865-49a8-4840-ac42-9b94db300e58
	I0222 20:46:01.655227    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.655288    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-216000","namespace":"kube-system","uid":"a28861be-afed-4463-a3c0-e438a5122dc8","resourceVersion":"276","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.mirror":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.seen":"2023-02-23T04:45:32.257429393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0222 20:46:01.655541    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.655547    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.655552    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.655559    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.657527    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:01.657536    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.657541    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.657546    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.657551    8582 round_trippers.go:580]     Audit-Id: 82a6e392-e7b5-4d15-bb68-f62e42301358
	I0222 20:46:01.657556    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.657561    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.657566    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.657611    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.657784    8582 pod_ready.go:92] pod "kube-apiserver-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.657792    8582 pod_ready.go:81] duration metric: took 4.913891ms waiting for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.657797    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.657823    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-216000
	I0222 20:46:01.657828    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.657833    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.657839    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.659962    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.659971    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.659976    8582 round_trippers.go:580]     Audit-Id: 20530d6e-999d-4b83-9fa1-08eeb6484a0e
	I0222 20:46:01.659981    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.659987    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.659991    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.659997    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.660002    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.660083    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-216000","namespace":"kube-system","uid":"a851a311-37aa-46d5-9152-a95acbbc88ec","resourceVersion":"272","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.mirror":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257424246Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0222 20:46:01.660320    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.660325    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.660331    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.660338    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.662376    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.662385    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.662391    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.662396    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.662402    8582 round_trippers.go:580]     Audit-Id: 6595d8d4-0ff8-4b38-ad13-d168e2dcb100
	I0222 20:46:01.662407    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.662412    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.662417    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.662459    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.662618    8582 pod_ready.go:92] pod "kube-controller-manager-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.662624    8582 pod_ready.go:81] duration metric: took 4.821724ms waiting for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.662629    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.662659    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-fgxrw
	I0222 20:46:01.662664    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.662669    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.662675    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.664591    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:01.664601    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.664607    8582 round_trippers.go:580]     Audit-Id: 84e8c5b6-a703-48fe-b328-61e5e74b1a63
	I0222 20:46:01.664612    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.664618    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.664623    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.664627    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.664632    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.664687    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fgxrw","generateName":"kube-proxy-","namespace":"kube-system","uid":"7402cf62-2944-469b-9c38-0447377d4579","resourceVersion":"393","creationTimestamp":"2023-02-23T04:45:44Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0222 20:46:01.840015    8582 request.go:622] Waited for 175.056702ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.840043    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:01.840047    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:01.840053    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:01.840060    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:01.842817    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:01.842827    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:01.842833    8582 round_trippers.go:580]     Audit-Id: 5c6904ba-5404-4f47-8f45-fa0ec8a99bee
	I0222 20:46:01.842838    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:01.842843    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:01.842848    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:01.842853    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:01.842858    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:01 GMT
	I0222 20:46:01.842917    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:01.843098    8582 pod_ready.go:92] pod "kube-proxy-fgxrw" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:01.843104    8582 pod_ready.go:81] duration metric: took 180.472895ms waiting for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:01.843110    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:02.041988    8582 request.go:622] Waited for 198.823853ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-216000
	I0222 20:46:02.042146    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-216000
	I0222 20:46:02.042158    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.042169    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.042182    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.047502    8582 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0222 20:46:02.047521    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.047527    8582 round_trippers.go:580]     Audit-Id: 0843bfe4-22ad-4594-a36c-ba20edc80e7c
	I0222 20:46:02.047532    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.047536    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.047541    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.047546    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.047551    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.047614    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-216000","namespace":"kube-system","uid":"a77cec17-0ffa-4b1b-91b0-aa6367fc7848","resourceVersion":"270","creationTimestamp":"2023-02-23T04:45:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.mirror":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.seen":"2023-02-23T04:45:22.142158982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0222 20:46:02.241177    8582 request.go:622] Waited for 193.17144ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:02.241228    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:02.241235    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.241247    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.241257    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.245229    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:02.245244    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.245252    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.245259    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.245266    8582 round_trippers.go:580]     Audit-Id: ad9e33d1-828e-440f-b4f6-e72f827fe347
	I0222 20:46:02.245273    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.245279    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.245286    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.245385    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0222 20:46:02.245597    8582 pod_ready.go:92] pod "kube-scheduler-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:02.245603    8582 pod_ready.go:81] duration metric: took 402.491987ms waiting for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:02.245610    8582 pod_ready.go:38] duration metric: took 16.125498934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:46:02.245625    8582 api_server.go:51] waiting for apiserver process to appear ...
	I0222 20:46:02.245687    8582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 20:46:02.255118    8582 command_runner.go:130] > 1920
	I0222 20:46:02.255764    8582 api_server.go:71] duration metric: took 16.816800107s to wait for apiserver process to appear ...
	I0222 20:46:02.255774    8582 api_server.go:87] waiting for apiserver healthz status ...
	I0222 20:46:02.255785    8582 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51085/healthz ...
	I0222 20:46:02.261002    8582 api_server.go:278] https://127.0.0.1:51085/healthz returned 200:
	ok
	I0222 20:46:02.261035    8582 round_trippers.go:463] GET https://127.0.0.1:51085/version
	I0222 20:46:02.261039    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.261047    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.261053    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.262305    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:02.262314    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.262319    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.262325    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.262333    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.262338    8582 round_trippers.go:580]     Content-Length: 263
	I0222 20:46:02.262343    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.262347    8582 round_trippers.go:580]     Audit-Id: 940d1402-f44f-4fea-89fc-74b1769b4bd3
	I0222 20:46:02.262353    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.262362    8582 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0222 20:46:02.262410    8582 api_server.go:140] control plane version: v1.26.1
	I0222 20:46:02.262416    8582 api_server.go:130] duration metric: took 6.639194ms to wait for apiserver health ...
	I0222 20:46:02.262420    8582 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 20:46:02.440157    8582 request.go:622] Waited for 177.694516ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.440199    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.440209    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.440277    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.440286    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.444553    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:02.444566    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.444576    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.444581    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.444586    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.444591    8582 round_trippers.go:580]     Audit-Id: 9502ef66-3432-4eb2-9ac0-475cbd92a774
	I0222 20:46:02.444597    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.444601    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.445075    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0222 20:46:02.446384    8582 system_pods.go:59] 8 kube-system pods found
	I0222 20:46:02.446394    8582 system_pods.go:61] "coredns-787d4945fb-48v9r" [e6f820e8-bc10-4500-8a19-17a16c982d46] Running
	I0222 20:46:02.446398    8582 system_pods.go:61] "etcd-multinode-216000" [c2b06896-f123-48bd-8603-0d7493488f5c] Running
	I0222 20:46:02.446402    8582 system_pods.go:61] "kindnet-m7gzm" [16c4431b-9696-442c-bcd2-626629a1cb64] Running
	I0222 20:46:02.446406    8582 system_pods.go:61] "kube-apiserver-multinode-216000" [a28861be-afed-4463-a3c0-e438a5122dc8] Running
	I0222 20:46:02.446412    8582 system_pods.go:61] "kube-controller-manager-multinode-216000" [a851a311-37aa-46d5-9152-a95acbbc88ec] Running
	I0222 20:46:02.446416    8582 system_pods.go:61] "kube-proxy-fgxrw" [7402cf62-2944-469b-9c38-0447377d4579] Running
	I0222 20:46:02.446421    8582 system_pods.go:61] "kube-scheduler-multinode-216000" [a77cec17-0ffa-4b1b-91b0-aa6367fc7848] Running
	I0222 20:46:02.446424    8582 system_pods.go:61] "storage-provisioner" [9540d868-f1fc-476f-8ebd-f4f5ac9bebac] Running
	I0222 20:46:02.446428    8582 system_pods.go:74] duration metric: took 184.006753ms to wait for pod list to return data ...
	I0222 20:46:02.446437    8582 default_sa.go:34] waiting for default service account to be created ...
	I0222 20:46:02.640074    8582 request.go:622] Waited for 193.587029ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/default/serviceaccounts
	I0222 20:46:02.640169    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/default/serviceaccounts
	I0222 20:46:02.640180    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.640192    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.640204    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.643643    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:02.643653    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.643658    8582 round_trippers.go:580]     Audit-Id: ebe51535-faea-48b2-8c93-8c35a6c16e5f
	I0222 20:46:02.643663    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.643668    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.643673    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.643679    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.643689    8582 round_trippers.go:580]     Content-Length: 261
	I0222 20:46:02.643694    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.643707    8582 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"53d67d08-9409-45ee-aadf-89048d75e48e","resourceVersion":"304","creationTimestamp":"2023-02-23T04:45:44Z"}}]}
	I0222 20:46:02.643813    8582 default_sa.go:45] found service account: "default"
	I0222 20:46:02.643819    8582 default_sa.go:55] duration metric: took 197.380215ms for default service account to be created ...
	I0222 20:46:02.643828    8582 system_pods.go:116] waiting for k8s-apps to be running ...
	I0222 20:46:02.841986    8582 request.go:622] Waited for 198.120655ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.842135    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:02.842147    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:02.842160    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:02.842171    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:02.847486    8582 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0222 20:46:02.847502    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:02.847508    8582 round_trippers.go:580]     Audit-Id: b96b3b16-8ba5-4a64-9795-9d9b3d0cd8f8
	I0222 20:46:02.847513    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:02.847520    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:02.847527    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:02.847538    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:02.847544    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:02 GMT
	I0222 20:46:02.848654    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0222 20:46:02.849926    8582 system_pods.go:86] 8 kube-system pods found
	I0222 20:46:02.849935    8582 system_pods.go:89] "coredns-787d4945fb-48v9r" [e6f820e8-bc10-4500-8a19-17a16c982d46] Running
	I0222 20:46:02.849941    8582 system_pods.go:89] "etcd-multinode-216000" [c2b06896-f123-48bd-8603-0d7493488f5c] Running
	I0222 20:46:02.849945    8582 system_pods.go:89] "kindnet-m7gzm" [16c4431b-9696-442c-bcd2-626629a1cb64] Running
	I0222 20:46:02.849949    8582 system_pods.go:89] "kube-apiserver-multinode-216000" [a28861be-afed-4463-a3c0-e438a5122dc8] Running
	I0222 20:46:02.849953    8582 system_pods.go:89] "kube-controller-manager-multinode-216000" [a851a311-37aa-46d5-9152-a95acbbc88ec] Running
	I0222 20:46:02.849957    8582 system_pods.go:89] "kube-proxy-fgxrw" [7402cf62-2944-469b-9c38-0447377d4579] Running
	I0222 20:46:02.849962    8582 system_pods.go:89] "kube-scheduler-multinode-216000" [a77cec17-0ffa-4b1b-91b0-aa6367fc7848] Running
	I0222 20:46:02.849966    8582 system_pods.go:89] "storage-provisioner" [9540d868-f1fc-476f-8ebd-f4f5ac9bebac] Running
	I0222 20:46:02.849970    8582 system_pods.go:126] duration metric: took 206.141147ms to wait for k8s-apps to be running ...
	I0222 20:46:02.849979    8582 system_svc.go:44] waiting for kubelet service to be running ....
	I0222 20:46:02.850037    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:46:02.859842    8582 system_svc.go:56] duration metric: took 9.86221ms WaitForService to wait for kubelet.
	I0222 20:46:02.859855    8582 kubeadm.go:578] duration metric: took 17.420896974s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0222 20:46:02.859869    8582 node_conditions.go:102] verifying NodePressure condition ...
	I0222 20:46:03.039987    8582 request.go:622] Waited for 180.026741ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:03.040032    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:03.040036    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:03.040043    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:03.040049    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:03.042481    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:03.042492    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:03.042497    8582 round_trippers.go:580]     Audit-Id: f96fdf85-a7cd-4e39-9dd6-0fb7d3be5def
	I0222 20:46:03.042502    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:03.042508    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:03.042517    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:03.042522    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:03.042527    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:03 GMT
	I0222 20:46:03.042585    8582 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"407","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5005 chars]
	I0222 20:46:03.042807    8582 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 20:46:03.042819    8582 node_conditions.go:123] node cpu capacity is 6
	I0222 20:46:03.042829    8582 node_conditions.go:105] duration metric: took 182.957935ms to run NodePressure ...
	I0222 20:46:03.042837    8582 start.go:228] waiting for startup goroutines ...
	I0222 20:46:03.042843    8582 start.go:233] waiting for cluster config update ...
	I0222 20:46:03.042870    8582 start.go:242] writing updated cluster config ...
	I0222 20:46:03.063512    8582 out.go:177] 
	I0222 20:46:03.101815    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:46:03.101927    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:46:03.124498    8582 out.go:177] * Starting worker node multinode-216000-m02 in cluster multinode-216000
	I0222 20:46:03.167345    8582 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 20:46:03.188383    8582 out.go:177] * Pulling base image ...
	I0222 20:46:03.231238    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:46:03.231300    8582 cache.go:57] Caching tarball of preloaded images
	I0222 20:46:03.231301    8582 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 20:46:03.231472    8582 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 20:46:03.231489    8582 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 20:46:03.231597    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:46:03.288231    8582 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 20:46:03.288254    8582 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 20:46:03.288286    8582 cache.go:193] Successfully downloaded all kic artifacts
	I0222 20:46:03.288317    8582 start.go:364] acquiring machines lock for multinode-216000-m02: {Name:mk771672be864b661a9d3157699d8a2299fad1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 20:46:03.288470    8582 start.go:368] acquired machines lock for "multinode-216000-m02" in 142.417µs
	I0222 20:46:03.288496    8582 start.go:93] Provisioning new machine with config: &{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0222 20:46:03.288583    8582 start.go:125] createHost starting for "m02" (driver="docker")
	I0222 20:46:03.310228    8582 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0222 20:46:03.310404    8582 start.go:159] libmachine.API.Create for "multinode-216000" (driver="docker")
	I0222 20:46:03.310437    8582 client.go:168] LocalClient.Create starting
	I0222 20:46:03.310589    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
	I0222 20:46:03.310665    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:46:03.310690    8582 main.go:141] libmachine: Parsing certificate...
	I0222 20:46:03.310783    8582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
	I0222 20:46:03.310833    8582 main.go:141] libmachine: Decoding PEM data...
	I0222 20:46:03.310852    8582 main.go:141] libmachine: Parsing certificate...
	I0222 20:46:03.331516    8582 cli_runner.go:164] Run: docker network inspect multinode-216000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0222 20:46:03.388081    8582 network_create.go:76] Found existing network {name:multinode-216000 subnet:0xc0004de2d0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0222 20:46:03.388129    8582 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-216000-m02" container
	I0222 20:46:03.388257    8582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0222 20:46:03.447257    8582 cli_runner.go:164] Run: docker volume create multinode-216000-m02 --label name.minikube.sigs.k8s.io=multinode-216000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0222 20:46:03.503476    8582 oci.go:103] Successfully created a docker volume multinode-216000-m02
	I0222 20:46:03.503608    8582 cli_runner.go:164] Run: docker run --rm --name multinode-216000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000-m02 --entrypoint /usr/bin/test -v multinode-216000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0222 20:46:03.944305    8582 oci.go:107] Successfully prepared a docker volume multinode-216000-m02
	I0222 20:46:03.944350    8582 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:46:03.944361    8582 kic.go:190] Starting extracting preloaded images to volume ...
	I0222 20:46:03.944492    8582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0222 20:46:10.327136    8582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-216000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.382633829s)
	I0222 20:46:10.327159    8582 kic.go:199] duration metric: took 6.382869 seconds to extract preloaded images to volume
	I0222 20:46:10.327312    8582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0222 20:46:10.477375    8582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-216000-m02 --name multinode-216000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-216000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-216000-m02 --network multinode-216000 --ip 192.168.58.3 --volume multinode-216000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0222 20:46:10.841012    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Running}}
	I0222 20:46:10.906969    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:46:10.974576    8582 cli_runner.go:164] Run: docker exec multinode-216000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0222 20:46:11.082421    8582 oci.go:144] the created container "multinode-216000-m02" has a running status.
	I0222 20:46:11.082546    8582 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa...
	I0222 20:46:11.166692    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0222 20:46:11.166759    8582 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0222 20:46:11.276581    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:46:11.339407    8582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0222 20:46:11.339428    8582 kic_runner.go:114] Args: [docker exec --privileged multinode-216000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0222 20:46:11.440252    8582 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:46:11.501592    8582 machine.go:88] provisioning docker machine ...
	I0222 20:46:11.501622    8582 ubuntu.go:169] provisioning hostname "multinode-216000-m02"
	I0222 20:46:11.501720    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:11.584080    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:11.584484    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:11.584495    8582 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-216000-m02 && echo "multinode-216000-m02" | sudo tee /etc/hostname
	I0222 20:46:11.728290    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-216000-m02
	
	I0222 20:46:11.728380    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:11.788579    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:11.788947    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:11.788961    8582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-216000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-216000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-216000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 20:46:11.924694    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 20:46:11.924717    8582 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 20:46:11.924725    8582 ubuntu.go:177] setting up certificates
	I0222 20:46:11.924735    8582 provision.go:83] configureAuth start
	I0222 20:46:11.924826    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000-m02
	I0222 20:46:11.984151    8582 provision.go:138] copyHostCerts
	I0222 20:46:11.984200    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:46:11.984260    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 20:46:11.984266    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 20:46:11.984420    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 20:46:11.984601    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:46:11.984658    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 20:46:11.984664    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 20:46:11.984739    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 20:46:11.984880    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:46:11.984913    8582 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 20:46:11.984918    8582 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 20:46:11.984974    8582 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 20:46:11.985100    8582 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.multinode-216000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-216000-m02]
	I0222 20:46:12.504429    8582 provision.go:172] copyRemoteCerts
	I0222 20:46:12.504494    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 20:46:12.504546    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:12.565974    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:12.662253    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0222 20:46:12.662333    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 20:46:12.680393    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0222 20:46:12.680466    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0222 20:46:12.698324    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0222 20:46:12.698403    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 20:46:12.716052    8582 provision.go:86] duration metric: configureAuth took 791.317078ms
	I0222 20:46:12.716065    8582 ubuntu.go:193] setting minikube options for container-runtime
	I0222 20:46:12.716211    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:46:12.716281    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:12.775756    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:12.776116    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:12.776127    8582 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 20:46:12.909345    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 20:46:12.909358    8582 ubuntu.go:71] root file system type: overlay
	I0222 20:46:12.909459    8582 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 20:46:12.909536    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:12.968891    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:12.969246    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:12.969301    8582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 20:46:13.113473    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 20:46:13.113576    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:13.174448    8582 main.go:141] libmachine: Using SSH client type: native
	I0222 20:46:13.174806    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51154 <nil> <nil>}
	I0222 20:46:13.174820    8582 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 20:46:13.805977    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:46:13.111929614 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0222 20:46:13.806000    8582 machine.go:91] provisioned docker machine in 2.304412699s
	I0222 20:46:13.806006    8582 client.go:171] LocalClient.Create took 10.495682981s
	I0222 20:46:13.806044    8582 start.go:167] duration metric: libmachine.API.Create for "multinode-216000" took 10.495761921s
	I0222 20:46:13.806050    8582 start.go:300] post-start starting for "multinode-216000-m02" (driver="docker")
	I0222 20:46:13.806055    8582 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 20:46:13.806143    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 20:46:13.806200    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:13.867642    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:13.961894    8582 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 20:46:13.966041    8582 command_runner.go:130] > NAME="Ubuntu"
	I0222 20:46:13.966055    8582 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0222 20:46:13.966060    8582 command_runner.go:130] > ID=ubuntu
	I0222 20:46:13.966064    8582 command_runner.go:130] > ID_LIKE=debian
	I0222 20:46:13.966071    8582 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0222 20:46:13.966076    8582 command_runner.go:130] > VERSION_ID="20.04"
	I0222 20:46:13.966081    8582 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0222 20:46:13.966085    8582 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0222 20:46:13.966090    8582 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0222 20:46:13.966102    8582 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0222 20:46:13.966106    8582 command_runner.go:130] > VERSION_CODENAME=focal
	I0222 20:46:13.966110    8582 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0222 20:46:13.966156    8582 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 20:46:13.966167    8582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 20:46:13.966174    8582 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 20:46:13.966179    8582 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 20:46:13.966185    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 20:46:13.966293    8582 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 20:46:13.966446    8582 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 20:46:13.966452    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /etc/ssl/certs/31332.pem
	I0222 20:46:13.966632    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 20:46:13.974052    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:46:13.992577    8582 start.go:303] post-start completed in 186.521047ms
	I0222 20:46:13.993099    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000-m02
	I0222 20:46:14.052397    8582 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/config.json ...
	I0222 20:46:14.052834    8582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 20:46:14.052898    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:14.113123    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:14.206681    8582 command_runner.go:130] > 11%!
	(MISSING)I0222 20:46:14.207077    8582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 20:46:14.211542    8582 command_runner.go:130] > 50G
	I0222 20:46:14.211853    8582 start.go:128] duration metric: createHost completed in 10.923387707s
	I0222 20:46:14.211863    8582 start.go:83] releasing machines lock for "multinode-216000-m02", held for 10.923510699s
	I0222 20:46:14.211944    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000-m02
	I0222 20:46:14.297048    8582 out.go:177] * Found network options:
	I0222 20:46:14.317960    8582 out.go:177]   - NO_PROXY=192.168.58.2
	W0222 20:46:14.339012    8582 proxy.go:119] fail to check proxy env: Error ip not in block
	W0222 20:46:14.339065    8582 proxy.go:119] fail to check proxy env: Error ip not in block
	I0222 20:46:14.339259    8582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 20:46:14.339260    8582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 20:46:14.339366    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:14.339385    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:46:14.410169    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:14.410170    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:46:14.557593    8582 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0222 20:46:14.557617    8582 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0222 20:46:14.557626    8582 command_runner.go:130] > Device: 100006h/1048582d	Inode: 393237      Links: 1
	I0222 20:46:14.557633    8582 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:46:14.557641    8582 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:46:14.557650    8582 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0222 20:46:14.557656    8582 command_runner.go:130] > Change: 2023-02-23 04:22:34.614629251 +0000
	I0222 20:46:14.557661    8582 command_runner.go:130] >  Birth: -
	I0222 20:46:14.557686    8582 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0222 20:46:14.557788    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 20:46:14.580740    8582 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 20:46:14.580837    8582 ssh_runner.go:195] Run: which cri-dockerd
	I0222 20:46:14.584881    8582 command_runner.go:130] > /usr/bin/cri-dockerd
	I0222 20:46:14.585102    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 20:46:14.592651    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 20:46:14.606220    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0222 20:46:14.621509    8582 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0222 20:46:14.621534    8582 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0222 20:46:14.621545    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:46:14.621560    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:46:14.621657    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:46:14.634292    8582 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0222 20:46:14.634310    8582 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0222 20:46:14.635135    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 20:46:14.644120    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 20:46:14.653165    8582 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 20:46:14.653228    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 20:46:14.661973    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:46:14.671091    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 20:46:14.679844    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 20:46:14.688206    8582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 20:46:14.696454    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 20:46:14.705060    8582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 20:46:14.712051    8582 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0222 20:46:14.712704    8582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 20:46:14.719895    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:46:14.785817    8582 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 20:46:14.862698    8582 start.go:485] detecting cgroup driver to use...
	I0222 20:46:14.862720    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 20:46:14.862812    8582 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 20:46:14.874159    8582 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0222 20:46:14.874170    8582 command_runner.go:130] > [Unit]
	I0222 20:46:14.874178    8582 command_runner.go:130] > Description=Docker Application Container Engine
	I0222 20:46:14.874183    8582 command_runner.go:130] > Documentation=https://docs.docker.com
	I0222 20:46:14.874187    8582 command_runner.go:130] > BindsTo=containerd.service
	I0222 20:46:14.874192    8582 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0222 20:46:14.874198    8582 command_runner.go:130] > Wants=network-online.target
	I0222 20:46:14.874203    8582 command_runner.go:130] > Requires=docker.socket
	I0222 20:46:14.874206    8582 command_runner.go:130] > StartLimitBurst=3
	I0222 20:46:14.874217    8582 command_runner.go:130] > StartLimitIntervalSec=60
	I0222 20:46:14.874221    8582 command_runner.go:130] > [Service]
	I0222 20:46:14.874225    8582 command_runner.go:130] > Type=notify
	I0222 20:46:14.874229    8582 command_runner.go:130] > Restart=on-failure
	I0222 20:46:14.874234    8582 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0222 20:46:14.874240    8582 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0222 20:46:14.874254    8582 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0222 20:46:14.874260    8582 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0222 20:46:14.874266    8582 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0222 20:46:14.874272    8582 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0222 20:46:14.874277    8582 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0222 20:46:14.874286    8582 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0222 20:46:14.874298    8582 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0222 20:46:14.874304    8582 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0222 20:46:14.874307    8582 command_runner.go:130] > ExecStart=
	I0222 20:46:14.874320    8582 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0222 20:46:14.874325    8582 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0222 20:46:14.874330    8582 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0222 20:46:14.874336    8582 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0222 20:46:14.874339    8582 command_runner.go:130] > LimitNOFILE=infinity
	I0222 20:46:14.874344    8582 command_runner.go:130] > LimitNPROC=infinity
	I0222 20:46:14.874347    8582 command_runner.go:130] > LimitCORE=infinity
	I0222 20:46:14.874353    8582 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0222 20:46:14.874358    8582 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0222 20:46:14.874364    8582 command_runner.go:130] > TasksMax=infinity
	I0222 20:46:14.874368    8582 command_runner.go:130] > TimeoutStartSec=0
	I0222 20:46:14.874373    8582 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0222 20:46:14.874378    8582 command_runner.go:130] > Delegate=yes
	I0222 20:46:14.874387    8582 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0222 20:46:14.874390    8582 command_runner.go:130] > KillMode=process
	I0222 20:46:14.874394    8582 command_runner.go:130] > [Install]
	I0222 20:46:14.874398    8582 command_runner.go:130] > WantedBy=multi-user.target
	I0222 20:46:14.874408    8582 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 20:46:14.874468    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 20:46:14.885992    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 20:46:14.900791    8582 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:46:14.900803    8582 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0222 20:46:14.901561    8582 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 20:46:14.980725    8582 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 20:46:15.076788    8582 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 20:46:15.076820    8582 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 20:46:15.090814    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:46:15.182035    8582 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 20:46:15.421323    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:46:15.497048    8582 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0222 20:46:15.497162    8582 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 20:46:15.572440    8582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 20:46:15.640210    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 20:46:15.720205    8582 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 20:46:15.731696    8582 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 20:46:15.731788    8582 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 20:46:15.735925    8582 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0222 20:46:15.735935    8582 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0222 20:46:15.735942    8582 command_runner.go:130] > Device: 10001bh/1048603d	Inode: 206         Links: 1
	I0222 20:46:15.735949    8582 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0222 20:46:15.735955    8582 command_runner.go:130] > Access: 2023-02-23 04:46:15.727929414 +0000
	I0222 20:46:15.735959    8582 command_runner.go:130] > Modify: 2023-02-23 04:46:15.727929414 +0000
	I0222 20:46:15.735964    8582 command_runner.go:130] > Change: 2023-02-23 04:46:15.728929413 +0000
	I0222 20:46:15.735967    8582 command_runner.go:130] >  Birth: -
	I0222 20:46:15.735989    8582 start.go:553] Will wait 60s for crictl version
	I0222 20:46:15.736032    8582 ssh_runner.go:195] Run: which crictl
	I0222 20:46:15.739516    8582 command_runner.go:130] > /usr/bin/crictl
	I0222 20:46:15.739571    8582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 20:46:15.832105    8582 command_runner.go:130] > Version:  0.1.0
	I0222 20:46:15.832122    8582 command_runner.go:130] > RuntimeName:  docker
	I0222 20:46:15.832128    8582 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0222 20:46:15.832135    8582 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0222 20:46:15.834551    8582 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 20:46:15.834633    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:46:15.859561    8582 command_runner.go:130] > 23.0.1
	I0222 20:46:15.861339    8582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 20:46:15.885579    8582 command_runner.go:130] > 23.0.1
	I0222 20:46:15.930657    8582 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 20:46:15.952562    8582 out.go:177]   - env NO_PROXY=192.168.58.2
	I0222 20:46:15.973605    8582 cli_runner.go:164] Run: docker exec -t multinode-216000-m02 dig +short host.docker.internal
	I0222 20:46:16.097800    8582 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 20:46:16.097914    8582 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 20:46:16.102508    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 20:46:16.112430    8582 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000 for IP: 192.168.58.3
	I0222 20:46:16.112457    8582 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:46:16.112701    8582 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 20:46:16.112776    8582 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 20:46:16.112787    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0222 20:46:16.112866    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0222 20:46:16.112891    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0222 20:46:16.112911    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0222 20:46:16.113041    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 20:46:16.113119    8582 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 20:46:16.113130    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 20:46:16.113181    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 20:46:16.113219    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 20:46:16.113289    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 20:46:16.113359    8582 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 20:46:16.113393    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.113432    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem -> /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.113450    8582 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.113837    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 20:46:16.131404    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 20:46:16.149340    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 20:46:16.166865    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 20:46:16.185817    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 20:46:16.203386    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 20:46:16.221611    8582 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 20:46:16.239710    8582 ssh_runner.go:195] Run: openssl version
	I0222 20:46:16.245071    8582 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0222 20:46:16.245411    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 20:46:16.253747    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.257729    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.257824    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.257870    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 20:46:16.262981    8582 command_runner.go:130] > b5213941
	I0222 20:46:16.263329    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 20:46:16.272049    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 20:46:16.280618    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.284725    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.284755    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.284801    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 20:46:16.290228    8582 command_runner.go:130] > 51391683
	I0222 20:46:16.290573    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 20:46:16.298810    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 20:46:16.307038    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.311062    8582 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.311138    8582 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.311184    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 20:46:16.316314    8582 command_runner.go:130] > 3ec20f2e
	I0222 20:46:16.316748    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
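The steps above compute the OpenSSL subject hash of each CA file and link it into /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A minimal Go sketch of the same two steps (not minikube's implementation; it assumes `openssl` is on PATH and the paths are illustrative):

```go
// Minimal sketch: reproduce the hash-and-symlink step seen in the log above.
// Assumes `openssl` is on PATH; cert path and link location are illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(certPath string) error {
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL looks up CAs as /etc/ssl/certs/<hash>.0, so point that name at the cert.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ignore "does not exist"; mirrors `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```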
	I0222 20:46:16.325540    8582 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 20:46:16.349447    8582 command_runner.go:130] > cgroupfs
	I0222 20:46:16.351401    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:46:16.351423    8582 cni.go:136] 2 nodes found, recommending kindnet
	I0222 20:46:16.351435    8582 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 20:46:16.351452    8582 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-216000 NodeName:multinode-216000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 20:46:16.351546    8582 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-216000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 20:46:16.351593    8582 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-216000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
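The kubeadm config printed above is rendered from the per-node options logged at kubeadm.go:172. A minimal sketch, with made-up template and field names, of how such values could be templated into the InitConfiguration fragment:

```go
// Illustrative only: render a small kubeadm InitConfiguration fragment from
// per-node values, in the spirit of the YAML above. Template and struct names
// are assumptions, not minikube's actual code.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeIP, CRISocket, NodeName string
		APIServerPort               int
	}{"192.168.58.3", "/var/run/cri-dockerd.sock", "multinode-216000-m02", 8443}

	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```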
	I0222 20:46:16.351668    8582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 20:46:16.359184    8582 command_runner.go:130] > kubeadm
	I0222 20:46:16.359194    8582 command_runner.go:130] > kubectl
	I0222 20:46:16.359197    8582 command_runner.go:130] > kubelet
	I0222 20:46:16.359872    8582 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 20:46:16.359936    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0222 20:46:16.367960    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0222 20:46:16.381661    8582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 20:46:16.394894    8582 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0222 20:46:16.399028    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
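The one-liner above strips any stale `control-plane.minikube.internal` entry from /etc/hosts and appends the current mapping. A rough Go equivalent (illustrative only; it skips the sudo and temp-file handling the real command uses):

```go
// Sketch of the idempotent /etc/hosts update performed by the bash one-liner
// above; not minikube's code.
package main

import (
	"os"
	"strings"
)

func setControlPlaneHost(ip, name, hostsPath string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the control-plane alias (the `grep -v` step).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = setControlPlaneHost("192.168.58.2", "control-plane.minikube.internal", "/etc/hosts")
}
```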
	I0222 20:46:16.409218    8582 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:46:16.409395    8582 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:46:16.409412    8582 start.go:301] JoinCluster: &{Name:multinode-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:46:16.409483    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0222 20:46:16.409535    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:46:16.469970    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:46:16.632987    8582 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token up5o8d.t4cvrsg5qdcp35bq --discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 20:46:16.633040    8582 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0222 20:46:16.633070    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up5o8d.t4cvrsg5qdcp35bq --discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-216000-m02"
	I0222 20:46:16.675254    8582 command_runner.go:130] > [preflight] Running pre-flight checks
	I0222 20:46:16.797713    8582 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0222 20:46:16.797746    8582 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0222 20:46:16.823782    8582 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 20:46:16.823797    8582 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 20:46:16.823802    8582 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0222 20:46:16.897914    8582 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0222 20:46:18.411749    8582 command_runner.go:130] > This node has joined the cluster:
	I0222 20:46:18.411767    8582 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0222 20:46:18.411775    8582 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0222 20:46:18.411782    8582 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0222 20:46:18.415575    8582 command_runner.go:130] ! W0223 04:46:16.674745    1233 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 20:46:18.415594    8582 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 20:46:18.415602    8582 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 20:46:18.415617    8582 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token up5o8d.t4cvrsg5qdcp35bq --discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-216000-m02": (1.782555328s)
	I0222 20:46:18.415636    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0222 20:46:18.568267    8582 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0222 20:46:18.568350    8582 start.go:303] JoinCluster complete in 2.15893712s
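The join sequence above is two commands: `kubeadm token create --print-join-command --ttl=0` on the control plane, then the printed command plus worker-specific flags on the new node. A simplified sketch of that flow (assumes kubeadm is on PATH and runs locally rather than over minikube's SSH runner):

```go
// Sketch of the join flow logged above: generate the join command on the
// primary, then run it with the worker-specific flags seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (control plane): generate a non-expiring join command.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	join := strings.TrimSpace(string(out))

	// Step 2 (worker): append the extra flags and execute the join.
	join += " --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-216000-m02"
	cmd := exec.Command("/bin/bash", "-c", join)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```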
	I0222 20:46:18.568361    8582 cni.go:84] Creating CNI manager for ""
	I0222 20:46:18.568369    8582 cni.go:136] 2 nodes found, recommending kindnet
	I0222 20:46:18.568508    8582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0222 20:46:18.573989    8582 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0222 20:46:18.574003    8582 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0222 20:46:18.574014    8582 command_runner.go:130] > Device: a6h/166d	Inode: 267135      Links: 1
	I0222 20:46:18.574022    8582 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0222 20:46:18.574028    8582 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:46:18.574033    8582 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0222 20:46:18.574039    8582 command_runner.go:130] > Change: 2023-02-23 04:22:33.946629303 +0000
	I0222 20:46:18.574043    8582 command_runner.go:130] >  Birth: -
	I0222 20:46:18.574111    8582 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0222 20:46:18.574123    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0222 20:46:18.588101    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0222 20:46:18.782044    8582 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0222 20:46:18.784821    8582 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0222 20:46:18.786626    8582 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0222 20:46:18.794913    8582 command_runner.go:130] > daemonset.apps/kindnet configured
	I0222 20:46:18.802272    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:46:18.802522    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:46:18.802838    8582 round_trippers.go:463] GET https://127.0.0.1:51085/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0222 20:46:18.802845    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.802852    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.802860    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.805222    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.805232    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.805239    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.805246    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.805252    8582 round_trippers.go:580]     Content-Length: 291
	I0222 20:46:18.805257    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.805262    8582 round_trippers.go:580]     Audit-Id: d6b2b061-0c91-4ebe-a4b7-8c37a6dbbb48
	I0222 20:46:18.805267    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.805273    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.805287    8582 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1b86bd9d-8495-40cf-b9a1-acef7d79001d","resourceVersion":"426","creationTimestamp":"2023-02-23T04:45:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0222 20:46:18.805341    8582 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-216000" context rescaled to 1 replicas
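The rescale above sets the coredns Deployment to one replica through the scale subresource. A client-go sketch of the same call (the kubeconfig path is an assumption; minikube's own helper lives in kapi.go):

```go
// Sketch: scale the coredns Deployment to 1 replica via the scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deployments := client.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns scaled to 1 replica")
}
```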
	I0222 20:46:18.805356    8582 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0222 20:46:18.827722    8582 out.go:177] * Verifying Kubernetes components...
	I0222 20:46:18.868831    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:46:18.880767    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:46:18.941104    8582 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:46:18.941317    8582 kapi.go:59] client config for multinode-216000: &rest.Config{Host:"https://127.0.0.1:51085", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/multinode-216000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 20:46:18.941549    8582 node_ready.go:35] waiting up to 6m0s for node "multinode-216000-m02" to be "Ready" ...
	I0222 20:46:18.941594    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:18.941599    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.941605    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.941611    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.944225    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.944241    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.944249    8582 round_trippers.go:580]     Audit-Id: 0daf5ebe-a421-4659-82a4-5db257fa23df
	I0222 20:46:18.944256    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.944261    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.944267    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.944282    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.944287    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.944362    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:18.944558    8582 node_ready.go:49] node "multinode-216000-m02" has status "Ready":"True"
	I0222 20:46:18.944564    8582 node_ready.go:38] duration metric: took 3.007734ms waiting for node "multinode-216000-m02" to be "Ready" ...
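The readiness wait above polls the node object until its Ready condition reports True. A client-go sketch of such a wait (poll interval, timeout, and kubeconfig path are assumptions, not minikube's values):

```go
// Sketch: wait for a node's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-216000-m02", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-216000-m02" is Ready`)
}
```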
	I0222 20:46:18.944569    8582 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:46:18.944607    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods
	I0222 20:46:18.944611    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.944617    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.944622    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.947719    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:18.947733    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.947743    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.947752    8582 round_trippers.go:580]     Audit-Id: 9cfab1ea-0452-4235-935e-ae7de4df3621
	I0222 20:46:18.947761    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.947769    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.947777    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.947791    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.948876    8582 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"475"},"items":[{"metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0222 20:46:18.950503    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.950545    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-48v9r
	I0222 20:46:18.950550    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.950556    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.950562    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.952608    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.952617    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.952623    8582 round_trippers.go:580]     Audit-Id: 1df32b32-d3a0-4ae6-a62b-6ffee63f8bcd
	I0222 20:46:18.952628    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.952652    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.952658    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.952664    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.952670    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.952798    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-48v9r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e6f820e8-bc10-4500-8a19-17a16c982d46","resourceVersion":"422","creationTimestamp":"2023-02-23T04:45:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"68783bca-2409-492f-833f-7eac03547aa3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68783bca-2409-492f-833f-7eac03547aa3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0222 20:46:18.953052    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.953059    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.953065    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.953071    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.955527    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.955536    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.955542    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.955546    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.955552    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.955558    8582 round_trippers.go:580]     Audit-Id: ef170e8e-5244-4a78-a42f-2768561564d9
	I0222 20:46:18.955563    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.955571    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.955640    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.955830    8582 pod_ready.go:92] pod "coredns-787d4945fb-48v9r" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.955835    8582 pod_ready.go:81] duration metric: took 5.323974ms waiting for pod "coredns-787d4945fb-48v9r" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.955841    8582 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.955878    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/etcd-multinode-216000
	I0222 20:46:18.955884    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.955890    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.955895    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.958367    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.958377    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.958383    8582 round_trippers.go:580]     Audit-Id: d81426b0-1296-473d-a4ef-9f51011fd757
	I0222 20:46:18.958388    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.958394    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.958399    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.958404    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.958410    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.958455    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-216000","namespace":"kube-system","uid":"c2b06896-f123-48bd-8603-0d7493488f5c","resourceVersion":"389","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.mirror":"2d051eb8eb3728481071a1fb944f8fb9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257428627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0222 20:46:18.958683    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.958689    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.958695    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.958701    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.960893    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.960902    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.960908    8582 round_trippers.go:580]     Audit-Id: bfa68149-876f-4291-8787-2b94f01b62f1
	I0222 20:46:18.960913    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.960918    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.960923    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.960928    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.960933    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.961133    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.961322    8582 pod_ready.go:92] pod "etcd-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.961328    8582 pod_ready.go:81] duration metric: took 5.483121ms waiting for pod "etcd-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.961341    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.961373    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-216000
	I0222 20:46:18.961378    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.961386    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.961392    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.963651    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.963661    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.963666    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.963671    8582 round_trippers.go:580]     Audit-Id: d78e3e31-7cb7-4746-ae32-bdb4e869b316
	I0222 20:46:18.963677    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.963682    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.963700    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.963709    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.963796    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-216000","namespace":"kube-system","uid":"a28861be-afed-4463-a3c0-e438a5122dc8","resourceVersion":"276","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.mirror":"3327d28d34b6df60d7e253c5892d1f22","kubernetes.io/config.seen":"2023-02-23T04:45:32.257429393Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0222 20:46:18.964078    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.964084    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.964092    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.964097    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.966297    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:18.966305    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.966311    8582 round_trippers.go:580]     Audit-Id: 1030b8a7-65b7-494a-8b3e-ee25fa64c27e
	I0222 20:46:18.966316    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.966321    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.966327    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.966334    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.966340    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.966390    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.966581    8582 pod_ready.go:92] pod "kube-apiserver-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.966587    8582 pod_ready.go:81] duration metric: took 5.24023ms waiting for pod "kube-apiserver-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.966593    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.966620    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-216000
	I0222 20:46:18.966624    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.966629    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.966635    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.968557    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:18.968568    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.968575    8582 round_trippers.go:580]     Audit-Id: 0e1d7f2b-137b-4df3-9df0-2aadcbbacb16
	I0222 20:46:18.968582    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.968587    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.968592    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.968598    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.968603    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.968665    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-216000","namespace":"kube-system","uid":"a851a311-37aa-46d5-9152-a95acbbc88ec","resourceVersion":"272","creationTimestamp":"2023-02-23T04:45:32Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.mirror":"e1371af7f33022153b0d8ba7783d4fc9","kubernetes.io/config.seen":"2023-02-23T04:45:32.257424246Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0222 20:46:18.968925    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:18.968931    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:18.968937    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:18.968942    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:18.970812    8582 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0222 20:46:18.970821    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:18.970826    8582 round_trippers.go:580]     Audit-Id: fab1ed04-ee22-44a1-bcb5-d76f9046f7f4
	I0222 20:46:18.970831    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:18.970837    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:18.970844    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:18.970850    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:18.970855    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:18 GMT
	I0222 20:46:18.970899    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:18.971069    8582 pod_ready.go:92] pod "kube-controller-manager-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:18.971075    8582 pod_ready.go:81] duration metric: took 4.476472ms waiting for pod "kube-controller-manager-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:18.971080    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-46778" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:19.143681    8582 request.go:622] Waited for 172.562109ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:19.143744    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:19.143751    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.143760    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.143769    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.146804    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:19.146815    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.146820    8582 round_trippers.go:580]     Audit-Id: cbf86953-050d-4bba-ade2-9de2630b05ba
	I0222 20:46:19.146825    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.146830    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.146836    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.146842    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.146846    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.146907    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"466","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0222 20:46:19.341622    8582 request.go:622] Waited for 194.490892ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:19.341670    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:19.341675    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.341682    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.341687    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.344475    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:19.344484    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.344490    8582 round_trippers.go:580]     Audit-Id: 39093815-173f-4ad3-ad79-0c5d9d8a3ba3
	I0222 20:46:19.344504    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.344510    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.344515    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.344521    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.344526    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.344597    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:19.846693    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:19.846709    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.846717    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.846725    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.849415    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:19.849424    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.849429    8582 round_trippers.go:580]     Audit-Id: 98d2f8f1-6459-4728-bb89-e0f375564544
	I0222 20:46:19.849435    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.849445    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.849451    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.849455    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.849460    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.849528    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:19.849815    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:19.849822    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:19.849831    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:19.849839    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:19.852041    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:19.852050    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:19.852055    8582 round_trippers.go:580]     Audit-Id: 6e8c45f3-9b6c-45d3-b226-20d6e17614dd
	I0222 20:46:19.852060    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:19.852066    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:19.852070    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:19.852075    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:19.852080    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:19 GMT
	I0222 20:46:19.852245    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:20.346864    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:20.346886    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.346905    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.346921    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.351121    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:20.351142    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.351152    8582 round_trippers.go:580]     Audit-Id: c8258f38-86b3-4310-b3f4-2bd897ede14e
	I0222 20:46:20.351158    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.351165    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.351170    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.351174    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.351179    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.351473    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:20.351755    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:20.351762    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.351768    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.351777    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.353995    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:20.354008    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.354013    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.354018    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.354023    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.354027    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.354032    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.354038    8582 round_trippers.go:580]     Audit-Id: b76772b3-39e1-4239-acca-6bbeb1a3418c
	I0222 20:46:20.354109    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:20.846616    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:20.846635    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.846644    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.846652    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.849934    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:20.849944    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.849951    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.849956    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.849960    8582 round_trippers.go:580]     Audit-Id: f7941260-563a-468e-a52d-5bf0bf4e524e
	I0222 20:46:20.849965    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.849970    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.849975    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.850041    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:20.850322    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:20.850331    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:20.850337    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:20.850350    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:20.852772    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:20.852783    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:20.852788    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:20 GMT
	I0222 20:46:20.852824    8582 round_trippers.go:580]     Audit-Id: 847eec7f-3c05-4ceb-a2bb-56f9f4de0cb9
	I0222 20:46:20.852830    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:20.852835    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:20.852839    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:20.852844    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:20.853032    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:21.346841    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:21.346863    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.346875    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.346886    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.350603    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:21.350620    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.350632    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.350639    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.350647    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.350654    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.350660    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.350667    8582 round_trippers.go:580]     Audit-Id: 2dcf2bfc-db06-4c07-8b61-cb087f692f62
	I0222 20:46:21.351229    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:21.351507    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:21.351513    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.351519    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.351525    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.353613    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:21.353622    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.353628    8582 round_trippers.go:580]     Audit-Id: b59f483d-5073-410c-99d3-e012ea3f39cb
	I0222 20:46:21.353633    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.353638    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.353643    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.353651    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.353656    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.353702    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:21.353863    8582 pod_ready.go:102] pod "kube-proxy-46778" in "kube-system" namespace has status "Ready":"False"
	I0222 20:46:21.846722    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:21.846756    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.846763    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.846768    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.849827    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:21.849841    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.849847    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.849852    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.849857    8582 round_trippers.go:580]     Audit-Id: b78c7bcc-be7d-4078-ad7f-2c82c36301fa
	I0222 20:46:21.849866    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.849875    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.849881    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.849974    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"478","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0222 20:46:21.850242    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:21.850248    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:21.850254    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:21.850259    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:21.852583    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:21.852594    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:21.852601    8582 round_trippers.go:580]     Audit-Id: 8572c98f-6fe0-446c-9575-eee15f51a854
	I0222 20:46:21.852612    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:21.852623    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:21.852634    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:21.852642    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:21.852651    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:21 GMT
	I0222 20:46:21.852808    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:22.346661    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-46778
	I0222 20:46:22.346689    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.346703    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.346713    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.349738    8582 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0222 20:46:22.349754    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.349764    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.349772    8582 round_trippers.go:580]     Audit-Id: 87d263c0-76b3-4882-91c6-346a3caa7e3a
	I0222 20:46:22.349780    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.349789    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.349797    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.349806    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.350089    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-46778","generateName":"kube-proxy-","namespace":"kube-system","uid":"aab91623-b577-48c5-8c13-37e00347f038","resourceVersion":"488","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0222 20:46:22.350491    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000-m02
	I0222 20:46:22.350501    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.350511    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.350521    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.353025    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.353041    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.353049    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.353055    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.353059    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.353065    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.353070    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.353076    8582 round_trippers.go:580]     Audit-Id: f66995c1-079c-4b0f-9c28-a9463dba62b6
	I0222 20:46:22.353152    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000-m02","uid":"20d36be8-b083-4138-8041-963fed47453a","resourceVersion":"475","creationTimestamp":"2023-02-23T04:46:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:46:17Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0222 20:46:22.353396    8582 pod_ready.go:92] pod "kube-proxy-46778" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:22.353408    8582 pod_ready.go:81] duration metric: took 3.382361853s waiting for pod "kube-proxy-46778" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.353414    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.353456    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-proxy-fgxrw
	I0222 20:46:22.353461    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.353467    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.353472    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.356032    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.356044    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.356053    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.356065    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.356078    8582 round_trippers.go:580]     Audit-Id: 22dc97bb-cb4c-4bbf-9d47-8c11c650cca8
	I0222 20:46:22.356087    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.356099    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.356106    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.356404    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fgxrw","generateName":"kube-proxy-","namespace":"kube-system","uid":"7402cf62-2944-469b-9c38-0447377d4579","resourceVersion":"393","creationTimestamp":"2023-02-23T04:45:44Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7f888683-93ae-4995-81e9-e2b9c29ecfcf","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f888683-93ae-4995-81e9-e2b9c29ecfcf\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0222 20:46:22.356669    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:22.356676    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.356682    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.356688    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.358924    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.358935    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.358941    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.358945    8582 round_trippers.go:580]     Audit-Id: 22231229-4a0a-4731-862f-45405e118087
	I0222 20:46:22.358950    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.358955    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.358962    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.358969    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.359200    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:22.359400    8582 pod_ready.go:92] pod "kube-proxy-fgxrw" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:22.359406    8582 pod_ready.go:81] duration metric: took 5.986994ms waiting for pod "kube-proxy-fgxrw" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.359413    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.359447    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-216000
	I0222 20:46:22.359452    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.359457    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.359463    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.361952    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.361962    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.361968    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.361973    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.361978    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.361983    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.361988    8582 round_trippers.go:580]     Audit-Id: 81ec07f5-fd93-466c-96c8-71262db3993e
	I0222 20:46:22.361993    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.362043    8582 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-216000","namespace":"kube-system","uid":"a77cec17-0ffa-4b1b-91b0-aa6367fc7848","resourceVersion":"270","creationTimestamp":"2023-02-23T04:45:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.mirror":"0e812827214b9960209c3ba4dcd668c3","kubernetes.io/config.seen":"2023-02-23T04:45:22.142158982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T04:45:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0222 20:46:22.542321    8582 request.go:622] Waited for 180.040624ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:22.542413    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes/multinode-216000
	I0222 20:46:22.542423    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.542434    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.542445    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.546662    8582 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0222 20:46:22.546675    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.546681    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.546686    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.546696    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.546700    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.546705    8582 round_trippers.go:580]     Audit-Id: 0bd68af7-a048-4260-a55d-273668ed8a1c
	I0222 20:46:22.546711    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.546772    8582 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T04:45:29Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0222 20:46:22.546974    8582 pod_ready.go:92] pod "kube-scheduler-multinode-216000" in "kube-system" namespace has status "Ready":"True"
	I0222 20:46:22.546979    8582 pod_ready.go:81] duration metric: took 187.56463ms waiting for pod "kube-scheduler-multinode-216000" in "kube-system" namespace to be "Ready" ...
	I0222 20:46:22.546986    8582 pod_ready.go:38] duration metric: took 3.602451551s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 20:46:22.546996    8582 system_svc.go:44] waiting for kubelet service to be running ....
	I0222 20:46:22.547057    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:46:22.557100    8582 system_svc.go:56] duration metric: took 10.100547ms WaitForService to wait for kubelet.
	I0222 20:46:22.557117    8582 kubeadm.go:578] duration metric: took 3.751788104s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0222 20:46:22.557128    8582 node_conditions.go:102] verifying NodePressure condition ...
	I0222 20:46:22.741631    8582 request.go:622] Waited for 184.467541ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:22.741670    8582 round_trippers.go:463] GET https://127.0.0.1:51085/api/v1/nodes
	I0222 20:46:22.741677    8582 round_trippers.go:469] Request Headers:
	I0222 20:46:22.741685    8582 round_trippers.go:473]     Accept: application/json, */*
	I0222 20:46:22.741691    8582 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0222 20:46:22.744324    8582 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0222 20:46:22.744334    8582 round_trippers.go:577] Response Headers:
	I0222 20:46:22.744340    8582 round_trippers.go:580]     Audit-Id: 38ce9f3c-238e-4507-b050-635b3ac809a7
	I0222 20:46:22.744345    8582 round_trippers.go:580]     Cache-Control: no-cache, private
	I0222 20:46:22.744350    8582 round_trippers.go:580]     Content-Type: application/json
	I0222 20:46:22.744358    8582 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b5176e8c-d10d-4e4b-b542-c52d31da6c89
	I0222 20:46:22.744363    8582 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad5d7a1b-ff16-447e-aa26-fa90863f5e5a
	I0222 20:46:22.744368    8582 round_trippers.go:580]     Date: Thu, 23 Feb 2023 04:46:22 GMT
	I0222 20:46:22.744458    8582 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"multinode-216000","uid":"b89661d6-ed20-4697-a41a-c4e6516722a7","resourceVersion":"432","creationTimestamp":"2023-02-23T04:45:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-216000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"66d56dc3ac28a702789778ac47e90f12526a0321","minikube.k8s.io/name":"multinode-216000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_22T20_45_33_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10171 chars]
	I0222 20:46:22.744767    8582 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 20:46:22.744774    8582 node_conditions.go:123] node cpu capacity is 6
	I0222 20:46:22.744780    8582 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 20:46:22.744785    8582 node_conditions.go:123] node cpu capacity is 6
	I0222 20:46:22.744789    8582 node_conditions.go:105] duration metric: took 187.659547ms to run NodePressure ...
	I0222 20:46:22.744796    8582 start.go:228] waiting for startup goroutines ...
	I0222 20:46:22.744814    8582 start.go:242] writing updated cluster config ...
	I0222 20:46:22.745146    8582 ssh_runner.go:195] Run: rm -f paused
	I0222 20:46:22.784119    8582 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0222 20:46:22.807369    8582 out.go:177] * Done! kubectl is now configured to use "multinode-216000" cluster and "default" namespace by default
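The pod_ready.go entries above poll the API server roughly every 500ms, fetching the kube-proxy pod and its node until the pod's Ready condition turns True (the "Waited for ... due to client-side throttling" lines come from client-go's own rate limiter pacing those requests). As a rough sketch only, and not minikube's actual implementation, a comparable readiness wait with client-go could look like the following; the function name, kubeconfig path, and poll interval are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the API server until the pod's Ready condition is True
// or the timeout expires. Illustrative sketch only; minikube's pod_ready.go differs.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above shows ~500ms between polls
	}
	return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
}

func main() {
	// Assumes the default kubeconfig location; adjust for a minikube profile as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same pod and 6m0s budget as the wait logged above.
	if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-proxy-46778", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}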
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 04:45:13 UTC, end at Thu 2023-02-23 04:46:35 UTC. --
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499542152Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499564051Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499572853Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499632214Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499654897Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499703557Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499723024Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499737981Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499759117Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499973420Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.499997160Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.500427055Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.507259663Z" level=info msg="Loading containers: start."
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.587748519Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.621278170Z" level=info msg="Loading containers: done."
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.630089727Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.630149440Z" level=info msg="Daemon has completed initialization"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.651125010Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 04:45:17 multinode-216000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.655811050Z" level=info msg="API listen on [::]:2376"
	Feb 23 04:45:17 multinode-216000 dockerd[831]: time="2023-02-23T04:45:17.662886613Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 04:45:59 multinode-216000 dockerd[831]: time="2023-02-23T04:45:59.752398209Z" level=info msg="ignoring event" container=c55ff201a3beafc9c7019ee48716439f5997eba482a3bdfec5f22e3fa91db8a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 04:45:59 multinode-216000 dockerd[831]: time="2023-02-23T04:45:59.858374176Z" level=info msg="ignoring event" container=fbcd20014202d62fae727a61457015133a4625ca6c475ea4175764118df8ca5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 04:46:00 multinode-216000 dockerd[831]: time="2023-02-23T04:46:00.749070438Z" level=info msg="ignoring event" container=027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 04:46:00 multinode-216000 dockerd[831]: time="2023-02-23T04:46:00.815707996Z" level=info msg="ignoring event" container=1f23609febdb93b06584e2b8dcfd321b7de2e61770d21055d57f831e411a6658 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	d817db693e5ff       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 seconds ago        Running             busybox                   0                   cf4fffb0d75b0
	fb3f53c39a6de       5185b96f0becf                                                                                         35 seconds ago       Running             coredns                   1                   83ecfda61b7c3
	fbcd25148deb8       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              47 seconds ago       Running             kindnet-cni               0                   9a583941b0c3a
	92a561568dbbc       6e38f40d628db                                                                                         49 seconds ago       Running             storage-provisioner       0                   2101cb58e3875
	c55ff201a3bea       5185b96f0becf                                                                                         49 seconds ago       Exited              coredns                   0                   fbcd20014202d
	88291aae322ac       46a6bb3c77ce0                                                                                         50 seconds ago       Running             kube-proxy                0                   018b2cd0c3e66
	6b81e4fbf6fb8       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   428f6e799d799
	7e0db19194ff3       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   b75a9eb44907f
	f3b7205a3e76d       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   bc028811fdb89
	ab226fd8fda30       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   16a57a2f27e7d
	
	* 
	* ==> coredns [c55ff201a3be] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 3457779542163645706.7867643797966139542. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 3457779542163645706.7867643797966139542. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [fb3f53c39a6d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:38494 - 5691 "HINFO IN 8213277836580515030.8808638030112362167. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014422298s
	[INFO] 10.244.0.3:53106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171719s
	[INFO] 10.244.0.3:47359 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.004571152s
	[INFO] 10.244.0.3:57995 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00382021s
	[INFO] 10.244.0.3:43987 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006539722s
	[INFO] 10.244.0.3:41933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137221s
	[INFO] 10.244.0.3:52335 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006567973s
	[INFO] 10.244.0.3:34458 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016992s
	[INFO] 10.244.0.3:50072 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127485s
	[INFO] 10.244.0.3:55564 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003918775s
	[INFO] 10.244.0.3:48263 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132491s
	[INFO] 10.244.0.3:36574 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096616s
	[INFO] 10.244.0.3:34001 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084501s
	[INFO] 10.244.0.3:58616 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099802s
	[INFO] 10.244.0.3:39839 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069938s
	[INFO] 10.244.0.3:60998 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080043s
	[INFO] 10.244.0.3:60140 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127766s
	[INFO] 10.244.0.3:47742 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013644s
	[INFO] 10.244.0.3:43935 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143937s
	[INFO] 10.244.0.3:50227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126624s
	[INFO] 10.244.0.3:60272 - 5 "PTR IN 2.65.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106885s
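The node_conditions.go lines earlier (ephemeral-storage capacity 61202244Ki, cpu capacity 6) and the Capacity blocks in the "describe nodes" output below are both read from each node's status.capacity. A minimal sketch of the same lookup, reusing the imports and clientset from the previous example; printNodeCapacities is an illustrative name, not a minikube function:

// printNodeCapacities lists each node's CPU and ephemeral-storage capacity,
// the same status.capacity fields behind the node_conditions.go lines above
// and the Capacity blocks in the describe output below. Illustrative only.
func printNodeCapacities(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]              // e.g. "6"
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // e.g. "61202244Ki"
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}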
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-216000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-216000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321
	                    minikube.k8s.io/name=multinode-216000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_22T20_45_33_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 04:45:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-216000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 04:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 04:46:34 +0000   Thu, 23 Feb 2023 04:45:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 04:46:34 +0000   Thu, 23 Feb 2023 04:45:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 04:46:34 +0000   Thu, 23 Feb 2023 04:45:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 04:46:34 +0000   Thu, 23 Feb 2023 04:45:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-216000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    14aace2c-fe48-40d9-b364-15d456a94896
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-c4gl8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-787d4945fb-48v9r                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     50s
	  kube-system                 etcd-multinode-216000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-m7gzm                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      51s
	  kube-system                 kube-apiserver-multinode-216000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-multinode-216000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-fgxrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-multinode-216000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 49s   kube-proxy       
	  Normal  Starting                 63s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  63s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  63s   kubelet          Node multinode-216000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s   kubelet          Node multinode-216000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s   kubelet          Node multinode-216000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s   node-controller  Node multinode-216000 event: Registered Node multinode-216000 in Controller
	
	
	Name:               multinode-216000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-216000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 04:46:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-216000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 04:46:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 04:46:18 +0000   Thu, 23 Feb 2023 04:46:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-216000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    14aace2c-fe48-40d9-b364-15d456a94896
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-mhxxv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kindnet-7vj2s               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18s
	  kube-system                 kube-proxy-46778            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 18s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x2 over 18s)  kubelet          Node multinode-216000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x2 over 18s)  kubelet          Node multinode-216000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x2 over 18s)  kubelet          Node multinode-216000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                17s                kubelet          Node multinode-216000-m02 status is now: NodeReady
	  Normal  RegisteredNode           16s                node-controller  Node multinode-216000-m02 event: Registered Node multinode-216000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000081] FS-Cache: O-key=[8] '9b91130600000000'
	[  +0.000132] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000083] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=00000000defe59bd
	[  +0.000064] FS-Cache: N-key=[8] '9b91130600000000'
	[  +0.003548] FS-Cache: Duplicate cookie detected
	[  +0.000041] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000053] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=00000000431c20f9
	[  +0.000062] FS-Cache: O-key=[8] '9b91130600000000'
	[  +0.000127] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000080] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=0000000013b0fbbe
	[  +0.000045] FS-Cache: N-key=[8] '9b91130600000000'
	[  +3.557940] FS-Cache: Duplicate cookie detected
	[  +0.000036] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000054] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=000000005612b0fe
	[  +0.000059] FS-Cache: O-key=[8] '9a91130600000000'
	[  +0.000042] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000042] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=000000007465c420
	[  +0.000051] FS-Cache: N-key=[8] '9a91130600000000'
	[  +0.500925] FS-Cache: Duplicate cookie detected
	[  +0.000054] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000033] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=0000000059e8f346
	[  +0.000062] FS-Cache: O-key=[8] 'b991130600000000'
	[  +0.000047] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000043] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=000000007c126f1c
	[  +0.000043] FS-Cache: N-key=[8] 'b991130600000000'
	
	* 
	* ==> etcd [ab226fd8fda3] <==
	* {"level":"info","ts":"2023-02-23T04:45:27.055Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T04:45:27.055Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T04:45:27.055Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T04:45:27.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-02-23T04:45:27.056Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-216000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.350Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.351Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T04:45:27.352Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-23T04:46:07.245Z","caller":"traceutil/trace.go:171","msg":"trace[145973251] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"197.676001ms","start":"2023-02-23T04:46:07.047Z","end":"2023-02-23T04:46:07.245Z","steps":["trace[145973251] 'process raft request'  (duration: 197.575679ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T04:46:09.495Z","caller":"traceutil/trace.go:171","msg":"trace[172939223] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"243.247847ms","start":"2023-02-23T04:46:09.252Z","end":"2023-02-23T04:46:09.495Z","steps":["trace[172939223] 'process raft request'  (duration: 243.077813ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:46:36 up 45 min,  0 users,  load average: 2.09, 1.57, 0.97
	Linux multinode-216000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [fbcd25148deb] <==
	* I0223 04:45:48.822497       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 04:45:48.822551       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 04:45:48.822650       1 main.go:116] setting mtu 1500 for CNI 
	I0223 04:45:48.822686       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 04:45:48.822699       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 04:45:49.316901       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:45:49.316949       1 main.go:227] handling current node
	I0223 04:45:59.424569       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:45:59.424609       1 main.go:227] handling current node
	I0223 04:46:09.497140       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:46:09.497200       1 main.go:227] handling current node
	I0223 04:46:19.501812       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:46:19.501852       1 main.go:227] handling current node
	I0223 04:46:19.501861       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 04:46:19.501865       1 main.go:250] Node multinode-216000-m02 has CIDR [10.244.1.0/24] 
	I0223 04:46:19.502030       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0223 04:46:29.509653       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 04:46:29.509696       1 main.go:227] handling current node
	I0223 04:46:29.509704       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 04:46:29.509709       1 main.go:250] Node multinode-216000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [f3b7205a3e76] <==
	* I0223 04:45:29.247157       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 04:45:29.247344       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 04:45:29.247681       1 cache.go:39] Caches are synced for autoregister controller
	I0223 04:45:29.247906       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 04:45:29.248031       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 04:45:29.249517       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 04:45:29.249533       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 04:45:29.250361       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 04:45:29.263698       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 04:45:29.964125       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 04:45:30.152043       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 04:45:30.154737       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 04:45:30.154831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 04:45:30.574866       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 04:45:30.637844       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 04:45:30.689286       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 04:45:30.694357       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 04:45:30.695102       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 04:45:30.698683       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 04:45:31.182895       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 04:45:32.146672       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 04:45:32.154470       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 04:45:32.161585       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 04:45:44.338211       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0223 04:45:44.837385       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [6b81e4fbf6fb] <==
	* I0223 04:45:44.187559       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0223 04:45:44.187627       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0223 04:45:44.221072       1 shared_informer.go:280] Caches are synced for disruption
	I0223 04:45:44.233337       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 04:45:44.240945       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 04:45:44.341529       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 04:45:44.554337       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 04:45:44.598648       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 04:45:44.598698       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 04:45:44.844139       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fgxrw"
	I0223 04:45:44.845869       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-m7gzm"
	I0223 04:45:44.944557       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 04:45:45.064311       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-j4pt7"
	I0223 04:45:45.076675       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-48v9r"
	I0223 04:45:45.144962       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-j4pt7"
	W0223 04:46:17.605968       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-216000-m02" does not exist
	I0223 04:46:17.609169       1 range_allocator.go:372] Set node multinode-216000-m02 PodCIDR to [10.244.1.0/24]
	I0223 04:46:17.612695       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7vj2s"
	I0223 04:46:17.612992       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-46778"
	W0223 04:46:18.219641       1 topologycache.go:232] Can't get CPU or zone information for multinode-216000-m02 node
	W0223 04:46:19.043278       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-216000-m02. Assuming now as a timestamp.
	I0223 04:46:19.043567       1 event.go:294] "Event occurred" object="multinode-216000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-216000-m02 event: Registered Node multinode-216000-m02 in Controller"
	I0223 04:46:23.783575       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 04:46:23.839991       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-mhxxv"
	I0223 04:46:23.852273       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-c4gl8"
	
	* 
	* ==> kube-proxy [88291aae322a] <==
	* I0223 04:45:45.846443       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 04:45:45.846531       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 04:45:45.846552       1 server_others.go:535] "Using iptables proxy"
	I0223 04:45:45.929802       1 server_others.go:176] "Using iptables Proxier"
	I0223 04:45:45.929850       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 04:45:45.929857       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 04:45:45.929873       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 04:45:45.929896       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 04:45:45.930339       1 server.go:655] "Version info" version="v1.26.1"
	I0223 04:45:45.930378       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 04:45:45.936478       1 config.go:317] "Starting service config controller"
	I0223 04:45:45.936505       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 04:45:45.936611       1 config.go:444] "Starting node config controller"
	I0223 04:45:45.936617       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 04:45:45.936756       1 config.go:226] "Starting endpoint slice config controller"
	I0223 04:45:45.936766       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 04:45:46.037358       1 shared_informer.go:280] Caches are synced for node config
	I0223 04:45:46.037404       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 04:45:46.037415       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [7e0db19194ff] <==
	* W0223 04:45:29.222047       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 04:45:29.222098       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 04:45:29.222182       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 04:45:29.222239       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 04:45:29.222181       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 04:45:29.222310       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0223 04:45:29.222405       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 04:45:29.222416       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 04:45:29.222631       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0223 04:45:29.222689       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 04:45:30.152931       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0223 04:45:30.152952       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0223 04:45:30.180144       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 04:45:30.180189       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 04:45:30.180823       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 04:45:30.180862       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 04:45:30.242456       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 04:45:30.242503       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 04:45:30.276564       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 04:45:30.276608       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 04:45:30.318777       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 04:45:30.318844       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 04:45:30.577855       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 04:45:30.577877       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0223 04:45:33.182849       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 04:45:13 UTC, end at Thu 2023-02-23 04:46:36 UTC. --
	Feb 23 04:45:48 multinode-216000 kubelet[2181]: I0223 04:45:48.061397    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-j4pt7" podStartSLOduration=3.061370814 pod.CreationTimestamp="2023-02-23 04:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:48.061045884 +0000 UTC m=+15.931163573" watchObservedRunningTime="2023-02-23 04:45:48.061370814 +0000 UTC m=+15.931488509"
	Feb 23 04:45:48 multinode-216000 kubelet[2181]: I0223 04:45:48.517819    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fgxrw" podStartSLOduration=4.517759268 pod.CreationTimestamp="2023-02-23 04:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:48.517541007 +0000 UTC m=+16.387658717" watchObservedRunningTime="2023-02-23 04:45:48.517759268 +0000 UTC m=+16.387876977"
	Feb 23 04:45:49 multinode-216000 kubelet[2181]: I0223 04:45:49.262988    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.262959767 pod.CreationTimestamp="2023-02-23 04:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:48.86094261 +0000 UTC m=+16.731060299" watchObservedRunningTime="2023-02-23 04:45:49.262959767 +0000 UTC m=+17.133077455"
	Feb 23 04:45:49 multinode-216000 kubelet[2181]: I0223 04:45:49.263096    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-m7gzm" podStartSLOduration=-9.223372031591692e+09 pod.CreationTimestamp="2023-02-23 04:45:44 +0000 UTC" firstStartedPulling="2023-02-23 04:45:45.746711806 +0000 UTC m=+13.616829491" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:45:49.262844914 +0000 UTC m=+17.132962604" watchObservedRunningTime="2023-02-23 04:45:49.263083637 +0000 UTC m=+17.133201326"
	Feb 23 04:45:53 multinode-216000 kubelet[2181]: I0223 04:45:53.246477    2181 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 04:45:53 multinode-216000 kubelet[2181]: I0223 04:45:53.247416    2181 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.261501    2181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcd20014202d62fae727a61457015133a4625ca6c475ea4175764118df8ca5d"
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.261544    2181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83ecfda61b7c397560d774e70af16d14bf264b3bc61aabeedc234596f9ce2aea"
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.971588    2181 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f320eeed-16b9-4969-b449-323abb78b55f-config-volume\") pod \"f320eeed-16b9-4969-b449-323abb78b55f\" (UID: \"f320eeed-16b9-4969-b449-323abb78b55f\") "
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.971694    2181 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj7nd\" (UniqueName: \"kubernetes.io/projected/f320eeed-16b9-4969-b449-323abb78b55f-kube-api-access-cj7nd\") pod \"f320eeed-16b9-4969-b449-323abb78b55f\" (UID: \"f320eeed-16b9-4969-b449-323abb78b55f\") "
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: W0223 04:46:00.972232    2181 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f320eeed-16b9-4969-b449-323abb78b55f/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.972522    2181 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f320eeed-16b9-4969-b449-323abb78b55f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f320eeed-16b9-4969-b449-323abb78b55f" (UID: "f320eeed-16b9-4969-b449-323abb78b55f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 04:46:00 multinode-216000 kubelet[2181]: I0223 04:46:00.974609    2181 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f320eeed-16b9-4969-b449-323abb78b55f-kube-api-access-cj7nd" (OuterVolumeSpecName: "kube-api-access-cj7nd") pod "f320eeed-16b9-4969-b449-323abb78b55f" (UID: "f320eeed-16b9-4969-b449-323abb78b55f"). InnerVolumeSpecName "kube-api-access-cj7nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.072080    2181 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f320eeed-16b9-4969-b449-323abb78b55f-config-volume\") on node \"multinode-216000\" DevicePath \"\""
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.072182    2181 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cj7nd\" (UniqueName: \"kubernetes.io/projected/f320eeed-16b9-4969-b449-323abb78b55f-kube-api-access-cj7nd\") on node \"multinode-216000\" DevicePath \"\""
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.285702    2181 scope.go:115] "RemoveContainer" containerID="027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.299321    2181 scope.go:115] "RemoveContainer" containerID="027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: E0223 04:46:01.299992    2181 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13" containerID="027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:01 multinode-216000 kubelet[2181]: I0223 04:46:01.300035    2181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13} err="failed to get container status \"027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13\": rpc error: code = Unknown desc = Error: No such container: 027b7b4383416ed23f7290faa237d2c8bd3b901979741084b611c3581da20f13"
	Feb 23 04:46:02 multinode-216000 kubelet[2181]: I0223 04:46:02.358773    2181 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f320eeed-16b9-4969-b449-323abb78b55f path="/var/lib/kubelet/pods/f320eeed-16b9-4969-b449-323abb78b55f/volumes"
	Feb 23 04:46:23 multinode-216000 kubelet[2181]: I0223 04:46:23.864736    2181 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 04:46:23 multinode-216000 kubelet[2181]: E0223 04:46:23.864809    2181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f320eeed-16b9-4969-b449-323abb78b55f" containerName="coredns"
	Feb 23 04:46:23 multinode-216000 kubelet[2181]: I0223 04:46:23.864843    2181 memory_manager.go:346] "RemoveStaleState removing state" podUID="f320eeed-16b9-4969-b449-323abb78b55f" containerName="coredns"
	Feb 23 04:46:24 multinode-216000 kubelet[2181]: I0223 04:46:24.031828    2181 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp78h\" (UniqueName: \"kubernetes.io/projected/d3e6682b-35f9-4054-bf12-86ca2b50d6ad-kube-api-access-wp78h\") pod \"busybox-6b86dd6d48-c4gl8\" (UID: \"d3e6682b-35f9-4054-bf12-86ca2b50d6ad\") " pod="default/busybox-6b86dd6d48-c4gl8"
	Feb 23 04:46:27 multinode-216000 kubelet[2181]: I0223 04:46:27.458885    2181 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-c4gl8" podStartSLOduration=-9.22337203239592e+09 pod.CreationTimestamp="2023-02-23 04:46:23 +0000 UTC" firstStartedPulling="2023-02-23 04:46:24.411232126 +0000 UTC m=+52.281697299" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 04:46:27.458738496 +0000 UTC m=+55.329528276" watchObservedRunningTime="2023-02-23 04:46:27.458855135 +0000 UTC m=+55.329644921"
	
	* 
	* ==> storage-provisioner [92a561568dbb] <==
	* I0223 04:45:46.870515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0223 04:45:46.920473       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0223 04:45:46.920593       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0223 04:45:46.928535       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0223 04:45:46.928644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed37d3f7-cde9-4eac-aad3-316d2cb56d11", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-216000_8f2a23e5-0bc1-4427-bb86-23d8c8c27eb8 became leader
	I0223 04:45:46.928692       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-216000_8f2a23e5-0bc1-4427-bb86-23d8c8c27eb8!
	I0223 04:45:47.028878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-216000_8f2a23e5-0bc1-4427-bb86-23d8c8c27eb8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-216000 -n multinode-216000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-216000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.66s)
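A rough manual approximation of the failing check, for local debugging against the same profile (the pod names are taken from the describe output above; using host.minikube.internal as the ping target is an assumption about how the host is exposed to the pods, not the exact logic of the Go test):

    kubectl --context multinode-216000 get pods -o wide
    kubectl --context multinode-216000 exec busybox-6b86dd6d48-c4gl8 -- nslookup host.minikube.internal
    kubectl --context multinode-216000 exec busybox-6b86dd6d48-c4gl8 -- ping -c 1 host.minikube.internal
    kubectl --context multinode-216000 exec busybox-6b86dd6d48-mhxxv -- ping -c 1 host.minikube.internal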

                                                
                                    
x
+
TestRunningBinaryUpgrade (70.32s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3343333602.exe start -p running-upgrade-762000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3343333602.exe start -p running-upgrade-762000 --memory=2200 --vm-driver=docker : exit status 70 (54.374997703s)

                                                
                                                
-- stdout --
	! [running-upgrade-762000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1462636464
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:59:19.328631139 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-762000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:59:38.555425967 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-762000", then "minikube start -p running-upgrade-762000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB  (progress output condensed)
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 04:59:38.555425967 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
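The restart sequence in the error above is the one minikube runs over SSH inside the kic container; the report captures the rewritten unit diff but not the daemon's own journal. A minimal sketch for pulling that state by hand, assuming the container from this failure is still up under its profile name (the docker inspect output later in this test shows running-upgrade-762000 created seconds before the diff timestamp) and that its systemd is reachable via docker exec:

	# show why docker.service refused to start, plus its most recent journal entries
	docker exec running-upgrade-762000 systemctl status docker.service --no-pager
	docker exec running-upgrade-762000 journalctl -u docker.service --no-pager -n 50
	# inspect the rewritten unit the diff above was generated against
	docker exec running-upgrade-762000 cat /lib/systemd/system/docker.service

These are standard docker/systemd commands rather than part of the test harness, shown only as a way to recover the journal that this report omits.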
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3343333602.exe start -p running-upgrade-762000 --memory=2200 --vm-driver=docker 
E0222 20:59:43.056932    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3343333602.exe start -p running-upgrade-762000 --memory=2200 --vm-driver=docker : exit status 70 (4.314006044s)

                                                
                                                
-- stdout --
	* [running-upgrade-762000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3883074490
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-762000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3343333602.exe start -p running-upgrade-762000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3343333602.exe start -p running-upgrade-762000 --memory=2200 --vm-driver=docker : exit status 70 (4.230959564s)

                                                
                                                
-- stdout --
	* [running-upgrade-762000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1147480080
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-762000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-02-22 20:59:52.378939 -0800 PST m=+2282.668272670
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-762000
helpers_test.go:235: (dbg) docker inspect running-upgrade-762000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d60a226c9a2dd05ff227a65945758a1dc832903da0a48925c0ebe2e3509fdfdf",
	        "Created": "2023-02-23T04:59:27.54943288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 170591,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T04:59:27.785039936Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/d60a226c9a2dd05ff227a65945758a1dc832903da0a48925c0ebe2e3509fdfdf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d60a226c9a2dd05ff227a65945758a1dc832903da0a48925c0ebe2e3509fdfdf/hostname",
	        "HostsPath": "/var/lib/docker/containers/d60a226c9a2dd05ff227a65945758a1dc832903da0a48925c0ebe2e3509fdfdf/hosts",
	        "LogPath": "/var/lib/docker/containers/d60a226c9a2dd05ff227a65945758a1dc832903da0a48925c0ebe2e3509fdfdf/d60a226c9a2dd05ff227a65945758a1dc832903da0a48925c0ebe2e3509fdfdf-json.log",
	        "Name": "/running-upgrade-762000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-762000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fe061f040b6b7b94abaf3d1625cd965f67a242c42ca7c9c93e3d32bb66f84322-init/diff:/var/lib/docker/overlay2/3809bca7bbca31396676a567d8cbe8022543aa2fc7f8e1f35de623c1eb8f082c/diff:/var/lib/docker/overlay2/7001dfd98a66ae7d206f8987ed718dcb859bbeffba7889774896583e23a70be1/diff:/var/lib/docker/overlay2/3e2cba4e745744bab9fc827a2d7a5199fac7789d76a4facb78222078e4a585a0/diff:/var/lib/docker/overlay2/f09668468bd4667efac9aeaa9d511cbe2c0debe927d14f4ca4d2aa8ff6b7fce5/diff:/var/lib/docker/overlay2/485e4fe1c68a1f59490773170f989f8d0d2cba63452a4212d0684a11047bb198/diff:/var/lib/docker/overlay2/a0baaf5e1ef2c08611311992793a0826620f8353760ad43a4c67ebc2b59d6fe3/diff:/var/lib/docker/overlay2/8385b8aa04f58288a2be68f7088a8fdc84de87fa69443d398684880ff81e3539/diff:/var/lib/docker/overlay2/232086d746b0b4f53939037276e587a36adc633928f67cef6069ad9ef7edf129/diff:/var/lib/docker/overlay2/d10ec2445d5bb316752ece7f1716309fd649d76ee7c83f76896fab522f478ac0/diff:/var/lib/docker/overlay2/b847fb
4f6755a5b58ce60944e330646ac169caaa5cdc4c5a8019b76e24591b0c/diff:/var/lib/docker/overlay2/193a2c6d5ad0db4bfcb6f97ed5d474004348e4cbf66e61af7c3830e9839eda3c/diff:/var/lib/docker/overlay2/881021416a6946d1219c033d4b36022bd9533de329077c4e88d6e2dc466a3436/diff:/var/lib/docker/overlay2/edd49e29d6a52b87c75d59949320122c4bbcfa8eacc889eb325e5eaea003438e/diff:/var/lib/docker/overlay2/e8a183e5f2e1e64fa7f5b289b2e9db45676df1f7bd22effd06c5b7c6cacd3830/diff:/var/lib/docker/overlay2/5f76c205b1257281d0378e1d3004cc1dad398403b5cb45cb3e7d7ca89ffa6479/diff:/var/lib/docker/overlay2/30b9f978bf14c9c9ee8054b0344b28407ceea4febe6689782544b465847bc927/diff:/var/lib/docker/overlay2/7e737a2172758df4045b0e9accf71b33f6a919c4cc3c489d3852df9ca26863fe/diff:/var/lib/docker/overlay2/962dad0c4c8f3b1848af61a35084296d991fa7018ca46d3913d4f6dc2f0eeb4d/diff:/var/lib/docker/overlay2/cfc9515ab9b140dd3b8195b2930c8cff1cddcb712151b7510ca528e9952f4d93/diff:/var/lib/docker/overlay2/5e8d14faff3855891be36b221d1cffdd00638db060ff50e8b928760b348f40f5/diff:/var/lib/d
ocker/overlay2/395eb4b2380c1656ffafea4d8ec3deca3a5ab69ec638f821bb7a9c20aeb2eee0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe061f040b6b7b94abaf3d1625cd965f67a242c42ca7c9c93e3d32bb66f84322/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe061f040b6b7b94abaf3d1625cd965f67a242c42ca7c9c93e3d32bb66f84322/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe061f040b6b7b94abaf3d1625cd965f67a242c42ca7c9c93e3d32bb66f84322/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-762000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-762000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-762000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-762000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-762000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9632aacf63c89c4a234d48098f8a4f5dfdf53b0b57cccc95ca2b8dee4a0bfb9d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52368"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52369"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52367"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9632aacf63c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "f57701079c3a62d1450222340c55dff45065427971a601c04d579980ab01ae7b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "243d1519a56171faf9c3f743a09d8363a9221131ce4b0ebb491009903f325875",
	                    "EndpointID": "f57701079c3a62d1450222340c55dff45065427971a601c04d579980ab01ae7b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
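For reference, the port bindings in the inspect output above can be read back with the same Go template the harness itself uses later in this report (see the cli_runner calls against kubernetes-upgrade-038000); for example, the forwarded SSH port:

	# prints the host port bound to 22/tcp in the container (52368 in the JSON above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-762000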
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-762000 -n running-upgrade-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-762000 -n running-upgrade-762000: exit status 6 (384.253129ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 20:59:52.825398   13350 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-762000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-762000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
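The status output above points at minikube update-context for the stale kubectl context; with multiple profiles in play the profile has to be named explicitly. A sketch only, since the stderr above shows this profile was never written to the kubeconfig at all, so the command may simply report the same missing endpoint:

	# rewrite the kubeconfig entry for this profile to the current apiserver endpoint
	out/minikube-darwin-amd64 update-context -p running-upgrade-762000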
helpers_test.go:175: Cleaning up "running-upgrade-762000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-762000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-762000: (2.346498647s)
--- FAIL: TestRunningBinaryUpgrade (70.32s)

                                                
                                    
x
+
TestKubernetesUpgrade (584.59s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0222 21:00:59.493971    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:00:59.499119    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:00:59.509219    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:00:59.529649    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:00:59.570604    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:00:59.652692    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:00:59.812879    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:01:00.133051    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:01:00.773185    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:01:02.053325    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:01:04.613362    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:01:09.733592    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:01:19.974396    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.288760767s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-038000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-038000 in cluster kubernetes-upgrade-038000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 21:00:55.684341   13749 out.go:296] Setting OutFile to fd 1 ...
	I0222 21:00:55.684511   13749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:00:55.684516   13749 out.go:309] Setting ErrFile to fd 2...
	I0222 21:00:55.684520   13749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:00:55.684634   13749 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 21:00:55.686013   13749 out.go:303] Setting JSON to false
	I0222 21:00:55.704765   13749 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3630,"bootTime":1677124825,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 21:00:55.704862   13749 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 21:00:55.726411   13749 out.go:177] * [kubernetes-upgrade-038000] minikube v1.29.0 on Darwin 13.2
	I0222 21:00:55.769918   13749 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 21:00:55.769912   13749 notify.go:220] Checking for updates...
	I0222 21:00:55.791544   13749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:00:55.812727   13749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 21:00:55.834551   13749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 21:00:55.855505   13749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 21:00:55.876892   13749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 21:00:55.900346   13749 config.go:182] Loaded profile config "cert-expiration-370000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:00:55.900406   13749 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 21:00:55.960134   13749 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 21:00:55.960241   13749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:00:56.102614   13749 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:00:56.010382059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:00:56.124559   13749 out.go:177] * Using the docker driver based on user configuration
	I0222 21:00:56.146295   13749 start.go:296] selected driver: docker
	I0222 21:00:56.146327   13749 start.go:857] validating driver "docker" against <nil>
	I0222 21:00:56.146345   13749 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 21:00:56.150049   13749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:00:56.290914   13749 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:00:56.199151563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:00:56.291055   13749 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0222 21:00:56.291237   13749 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0222 21:00:56.313026   13749 out.go:177] * Using Docker Desktop driver with root privileges
	I0222 21:00:56.334541   13749 cni.go:84] Creating CNI manager for ""
	I0222 21:00:56.334652   13749 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 21:00:56.334667   13749 start_flags.go:319] config:
	{Name:kubernetes-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:00:56.377806   13749 out.go:177] * Starting control plane node kubernetes-upgrade-038000 in cluster kubernetes-upgrade-038000
	I0222 21:00:56.398625   13749 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 21:00:56.419674   13749 out.go:177] * Pulling base image ...
	I0222 21:00:56.461560   13749 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:00:56.461658   13749 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0222 21:00:56.461664   13749 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 21:00:56.461680   13749 cache.go:57] Caching tarball of preloaded images
	I0222 21:00:56.461911   13749 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 21:00:56.461930   13749 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0222 21:00:56.463006   13749 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/config.json ...
	I0222 21:00:56.463160   13749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/config.json: {Name:mk2b188fcfdb89a1d5bcdd049cff6058091274f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:00:56.518227   13749 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 21:00:56.518254   13749 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 21:00:56.518479   13749 cache.go:193] Successfully downloaded all kic artifacts
	I0222 21:00:56.518520   13749 start.go:364] acquiring machines lock for kubernetes-upgrade-038000: {Name:mk53bee5973f8cb285d9d9235307ee3ee077de7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 21:00:56.518673   13749 start.go:368] acquired machines lock for "kubernetes-upgrade-038000" in 142.184µs
	I0222 21:00:56.518709   13749 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-038000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 21:00:56.518801   13749 start.go:125] createHost starting for "" (driver="docker")
	I0222 21:00:56.560444   13749 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0222 21:00:56.560914   13749 start.go:159] libmachine.API.Create for "kubernetes-upgrade-038000" (driver="docker")
	I0222 21:00:56.560963   13749 client.go:168] LocalClient.Create starting
	I0222 21:00:56.561193   13749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
	I0222 21:00:56.561253   13749 main.go:141] libmachine: Decoding PEM data...
	I0222 21:00:56.561278   13749 main.go:141] libmachine: Parsing certificate...
	I0222 21:00:56.561359   13749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
	I0222 21:00:56.561401   13749 main.go:141] libmachine: Decoding PEM data...
	I0222 21:00:56.561413   13749 main.go:141] libmachine: Parsing certificate...
	I0222 21:00:56.561969   13749 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-038000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0222 21:00:56.617121   13749 cli_runner.go:211] docker network inspect kubernetes-upgrade-038000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0222 21:00:56.617237   13749 network_create.go:281] running [docker network inspect kubernetes-upgrade-038000] to gather additional debugging logs...
	I0222 21:00:56.617254   13749 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-038000
	W0222 21:00:56.670689   13749 cli_runner.go:211] docker network inspect kubernetes-upgrade-038000 returned with exit code 1
	I0222 21:00:56.670724   13749 network_create.go:284] error running [docker network inspect kubernetes-upgrade-038000]: docker network inspect kubernetes-upgrade-038000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-038000
	I0222 21:00:56.670742   13749 network_create.go:286] output of [docker network inspect kubernetes-upgrade-038000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-038000
	
	** /stderr **
	I0222 21:00:56.670845   13749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0222 21:00:56.727217   13749 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 21:00:56.727561   13749 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010eefa0}
	I0222 21:00:56.727577   13749 network_create.go:123] attempt to create docker network kubernetes-upgrade-038000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0222 21:00:56.727645   13749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 kubernetes-upgrade-038000
	W0222 21:00:56.782982   13749 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 kubernetes-upgrade-038000 returned with exit code 1
	W0222 21:00:56.783014   13749 network_create.go:148] failed to create docker network kubernetes-upgrade-038000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 kubernetes-upgrade-038000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0222 21:00:56.783038   13749 network_create.go:115] failed to create docker network kubernetes-upgrade-038000 192.168.58.0/24, will retry: subnet is taken
	I0222 21:00:56.784489   13749 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 21:00:56.784816   13749 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010efdf0}
	I0222 21:00:56.784831   13749 network_create.go:123] attempt to create docker network kubernetes-upgrade-038000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0222 21:00:56.784902   13749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 kubernetes-upgrade-038000
	W0222 21:00:56.839396   13749 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 kubernetes-upgrade-038000 returned with exit code 1
	W0222 21:00:56.839437   13749 network_create.go:148] failed to create docker network kubernetes-upgrade-038000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 kubernetes-upgrade-038000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0222 21:00:56.839457   13749 network_create.go:115] failed to create docker network kubernetes-upgrade-038000 192.168.67.0/24, will retry: subnet is taken
	I0222 21:00:56.840965   13749 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 21:00:56.841303   13749 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000dcca70}
	I0222 21:00:56.841316   13749 network_create.go:123] attempt to create docker network kubernetes-upgrade-038000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0222 21:00:56.841389   13749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 kubernetes-upgrade-038000
	I0222 21:00:56.929172   13749 network_create.go:107] docker network kubernetes-upgrade-038000 192.168.76.0/24 created
	I0222 21:00:56.929200   13749 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-038000" container
	I0222 21:00:56.929332   13749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0222 21:00:56.985862   13749 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-038000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 --label created_by.minikube.sigs.k8s.io=true
	I0222 21:00:57.039896   13749 oci.go:103] Successfully created a docker volume kubernetes-upgrade-038000
	I0222 21:00:57.040021   13749 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-038000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 --entrypoint /usr/bin/test -v kubernetes-upgrade-038000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0222 21:00:57.580265   13749 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-038000
	I0222 21:00:57.580299   13749 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:00:57.580314   13749 kic.go:190] Starting extracting preloaded images to volume ...
	I0222 21:00:57.580423   13749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-038000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0222 21:01:03.314745   13749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-038000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.734370884s)
	I0222 21:01:03.314765   13749 kic.go:199] duration metric: took 5.734566 seconds to extract preloaded images to volume
	I0222 21:01:03.314886   13749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0222 21:01:03.496247   13749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-038000 --name kubernetes-upgrade-038000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-038000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-038000 --network kubernetes-upgrade-038000 --ip 192.168.76.2 --volume kubernetes-upgrade-038000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0222 21:01:03.862044   13749 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Running}}
	I0222 21:01:03.924162   13749 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Status}}
	I0222 21:01:03.990053   13749 cli_runner.go:164] Run: docker exec kubernetes-upgrade-038000 stat /var/lib/dpkg/alternatives/iptables
	I0222 21:01:04.113424   13749 oci.go:144] the created container "kubernetes-upgrade-038000" has a running status.
	I0222 21:01:04.113506   13749 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa...
	I0222 21:01:04.282843   13749 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0222 21:01:04.389208   13749 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Status}}
	I0222 21:01:04.446997   13749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0222 21:01:04.447014   13749 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-038000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0222 21:01:04.554864   13749 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Status}}
	I0222 21:01:04.613082   13749 machine.go:88] provisioning docker machine ...
	I0222 21:01:04.613127   13749 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-038000"
	I0222 21:01:04.613247   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:04.671890   13749 main.go:141] libmachine: Using SSH client type: native
	I0222 21:01:04.672283   13749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52481 <nil> <nil>}
	I0222 21:01:04.672297   13749 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-038000 && echo "kubernetes-upgrade-038000" | sudo tee /etc/hostname
	I0222 21:01:04.817791   13749 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-038000
	
	I0222 21:01:04.817901   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:04.882479   13749 main.go:141] libmachine: Using SSH client type: native
	I0222 21:01:04.882849   13749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52481 <nil> <nil>}
	I0222 21:01:04.882863   13749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-038000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-038000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-038000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 21:01:05.016718   13749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:01:05.016737   13749 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 21:01:05.016764   13749 ubuntu.go:177] setting up certificates
	I0222 21:01:05.016774   13749 provision.go:83] configureAuth start
	I0222 21:01:05.016846   13749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-038000
	I0222 21:01:05.076787   13749 provision.go:138] copyHostCerts
	I0222 21:01:05.076889   13749 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 21:01:05.076898   13749 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 21:01:05.077002   13749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 21:01:05.077194   13749 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 21:01:05.077202   13749 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 21:01:05.077268   13749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 21:01:05.077431   13749 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 21:01:05.077437   13749 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 21:01:05.077498   13749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 21:01:05.077621   13749 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-038000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-038000]
	I0222 21:01:05.379646   13749 provision.go:172] copyRemoteCerts
	I0222 21:01:05.379713   13749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 21:01:05.379765   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:05.438087   13749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52481 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
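	sshutil dials 127.0.0.1 on the forwarded port with the generated key and the docker user; a rough manual equivalent, reusing only the port and key path logged above:
	
	  ssh -o StrictHostKeyChecking=no \
	    -i /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa \
	    -p 52481 docker@127.0.0.1 'hostname && systemctl is-active docker'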
	I0222 21:01:05.533274   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 21:01:05.550612   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0222 21:01:05.567664   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0222 21:01:05.584675   13749 provision.go:86] duration metric: configureAuth took 567.901161ms
	I0222 21:01:05.584688   13749 ubuntu.go:193] setting minikube options for container-runtime
	I0222 21:01:05.584832   13749 config.go:182] Loaded profile config "kubernetes-upgrade-038000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0222 21:01:05.584894   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:05.643619   13749 main.go:141] libmachine: Using SSH client type: native
	I0222 21:01:05.644000   13749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52481 <nil> <nil>}
	I0222 21:01:05.644014   13749 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 21:01:05.778418   13749 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 21:01:05.778436   13749 ubuntu.go:71] root file system type: overlay
	I0222 21:01:05.778538   13749 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 21:01:05.778625   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:05.837310   13749 main.go:141] libmachine: Using SSH client type: native
	I0222 21:01:05.837654   13749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52481 <nil> <nil>}
	I0222 21:01:05.837703   13749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 21:01:05.982898   13749 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 21:01:05.983012   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:06.044149   13749 main.go:141] libmachine: Using SSH client type: native
	I0222 21:01:06.044529   13749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52481 <nil> <nil>}
	I0222 21:01:06.044543   13749 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 21:01:06.716367   13749 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 05:01:05.980783165 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0222 21:01:06.716388   13749 machine.go:91] provisioned docker machine in 2.10333124s
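	The unit file pushed over SSH above first clears the inherited start command with an empty ExecStart= and only then sets the new one, and the diff || { mv && restart } step restarts Docker only when the rendered unit actually differs from what is installed. A minimal sketch of the same override expressed as a drop-in, plus a check that systemd picked it up (paths and dockerd flags here are illustrative, not what minikube writes):
	
	  # Sketch: override ExecStart via a drop-in instead of replacing the unit.
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  printf '%s\n' '[Service]' 'ExecStart=' \
	    'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' |
	    sudo tee /etc/systemd/system/docker.service.d/override.conf
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	  # Verify which unit text and ExecStart systemd resolved.
	  systemctl cat docker.service | head -n 20
	  systemctl show docker.service -p ExecStart --no-pager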
	I0222 21:01:06.716394   13749 client.go:171] LocalClient.Create took 10.155631009s
	I0222 21:01:06.716412   13749 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-038000" took 10.155706813s
	I0222 21:01:06.716422   13749 start.go:300] post-start starting for "kubernetes-upgrade-038000" (driver="docker")
	I0222 21:01:06.716428   13749 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 21:01:06.716501   13749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 21:01:06.716568   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:06.777937   13749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52481 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:01:06.872782   13749 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 21:01:06.876423   13749 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 21:01:06.876439   13749 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 21:01:06.876451   13749 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 21:01:06.876456   13749 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 21:01:06.876466   13749 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 21:01:06.876565   13749 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 21:01:06.876744   13749 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 21:01:06.876936   13749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 21:01:06.884160   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:01:06.901294   13749 start.go:303] post-start completed in 184.860017ms
	I0222 21:01:06.901845   13749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-038000
	I0222 21:01:06.962240   13749 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/config.json ...
	I0222 21:01:06.962684   13749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 21:01:06.962737   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:07.020380   13749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52481 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:01:07.113571   13749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 21:01:07.118123   13749 start.go:128] duration metric: createHost completed in 10.599519988s
	I0222 21:01:07.118148   13749 start.go:83] releasing machines lock for "kubernetes-upgrade-038000", held for 10.599682114s
	I0222 21:01:07.118244   13749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-038000
	I0222 21:01:07.176999   13749 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0222 21:01:07.176999   13749 ssh_runner.go:195] Run: cat /version.json
	I0222 21:01:07.177091   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:07.177092   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:07.242499   13749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52481 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:01:07.242553   13749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52481 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:01:07.613592   13749 ssh_runner.go:195] Run: systemctl --version
	I0222 21:01:07.618389   13749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 21:01:07.623225   13749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 21:01:07.643993   13749 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 21:01:07.644069   13749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0222 21:01:07.658615   13749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0222 21:01:07.666337   13749 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
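	The find/sed runs above rewrite any bridge and podman CNI configs so their subnet/gateway values sit inside the cluster pod CIDR (10.244.0.0/16) and drop IPv6-only entries. A quick way to confirm the patched value in the file named in the log (the expected output line is illustrative):
	
	  sudo grep -n '"subnet"' /etc/cni/net.d/100-crio-bridge.conf
	  #   e.g.  "subnet": "10.244.0.0/16"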
	I0222 21:01:07.666358   13749 start.go:485] detecting cgroup driver to use...
	I0222 21:01:07.666376   13749 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:01:07.666469   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:01:07.679967   13749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0222 21:01:07.689060   13749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 21:01:07.697601   13749 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 21:01:07.697662   13749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 21:01:07.706157   13749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:01:07.714786   13749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 21:01:07.723431   13749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:01:07.732222   13749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 21:01:07.740119   13749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 21:01:07.748629   13749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 21:01:07.756057   13749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 21:01:07.763444   13749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:01:07.831766   13749 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 21:01:07.906811   13749 start.go:485] detecting cgroup driver to use...
	I0222 21:01:07.906830   13749 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:01:07.906912   13749 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 21:01:07.917959   13749 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 21:01:07.918026   13749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 21:01:07.930692   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:01:07.946117   13749 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 21:01:08.042864   13749 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 21:01:08.142851   13749 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 21:01:08.142870   13749 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
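	The 144-byte daemon.json copied here is what switches dockerd to the cgroupfs driver before the restart below. A hypothetical shape of that file; only the cgroupfs value is implied by the log, the rest is illustrative:
	
	  cat /etc/docker/daemon.json
	  # e.g. { "exec-opts": ["native.cgroupdriver=cgroupfs"] }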
	I0222 21:01:08.157035   13749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:01:08.246946   13749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 21:01:08.469638   13749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:01:08.497217   13749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:01:08.564805   13749 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0222 21:01:08.564953   13749 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-038000 dig +short host.docker.internal
	I0222 21:01:08.681447   13749 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:01:08.681560   13749 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:01:08.686088   13749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:01:08.696089   13749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:01:08.753943   13749 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:01:08.754052   13749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:01:08.773683   13749 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:01:08.773701   13749 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:01:08.773801   13749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:01:08.793019   13749 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:01:08.793032   13749 cache_images.go:84] Images are preloaded, skipping loading
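	The images listed above are the ones the v1.16.0 preload ships; the set kubeadm itself expects for this version can be listed directly (sketch, using the binary path seen later in this log):
	
	  sudo /var/lib/minikube/binaries/v1.16.0/kubeadm config images list \
	    --kubernetes-version v1.16.0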
	I0222 21:01:08.793124   13749 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:01:08.819575   13749 cni.go:84] Creating CNI manager for ""
	I0222 21:01:08.819600   13749 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 21:01:08.819616   13749 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 21:01:08.819634   13749 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-038000 NodeName:kubernetes-upgrade-038000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:01:08.819755   13749 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-038000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-038000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
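	
	This generated config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and ultimately handed to kubeadm init. A config like this can be sanity-checked without touching the node via a dry run (sketch only; assumes the v1.16.0 binary path used later in this log):
	
	  sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run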
	
	I0222 21:01:08.819839   13749 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-038000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0222 21:01:08.819920   13749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0222 21:01:08.828086   13749 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:01:08.828151   13749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:01:08.835832   13749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0222 21:01:08.848710   13749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:01:08.861765   13749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0222 21:01:08.874845   13749 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:01:08.878901   13749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:01:08.888801   13749 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000 for IP: 192.168.76.2
	I0222 21:01:08.888824   13749 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:01:08.889016   13749 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:01:08.889080   13749 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:01:08.889123   13749 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.key
	I0222 21:01:08.889141   13749 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.crt with IP's: []
	I0222 21:01:09.070146   13749 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.crt ...
	I0222 21:01:09.070162   13749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.crt: {Name:mk8d51556866323c5a02299076a7e0479d0495ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:01:09.070533   13749 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.key ...
	I0222 21:01:09.070541   13749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.key: {Name:mk3d13fe5d19dd3b6cd1b70d2fd38fc92679dbd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:01:09.070732   13749 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key.31bdca25
	I0222 21:01:09.070746   13749 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0222 21:01:09.267309   13749 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.crt.31bdca25 ...
	I0222 21:01:09.267322   13749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.crt.31bdca25: {Name:mk3f3a80c7241cba75901873c1a6eb22f9f1b9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:01:09.267723   13749 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key.31bdca25 ...
	I0222 21:01:09.267733   13749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key.31bdca25: {Name:mk511c66f12cd554666d0be0c9c9818ae0677153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:01:09.267953   13749 certs.go:333] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.crt
	I0222 21:01:09.268103   13749 certs.go:337] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key
	I0222 21:01:09.268262   13749 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.key
	I0222 21:01:09.268276   13749 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.crt with IP's: []
	I0222 21:01:09.378111   13749 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.crt ...
	I0222 21:01:09.378122   13749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.crt: {Name:mkaf7cb8a566fa36a2dba7c6d76e79e8594d8eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:01:09.378348   13749 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.key ...
	I0222 21:01:09.378356   13749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.key: {Name:mk46035af3b0fe7c768dd47af0f8ccc1735649db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:01:09.378728   13749 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:01:09.378774   13749 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:01:09.378785   13749 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:01:09.378818   13749 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:01:09.378847   13749 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:01:09.378879   13749 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:01:09.378949   13749 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:01:09.379408   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:01:09.398637   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0222 21:01:09.415870   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:01:09.433035   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0222 21:01:09.450155   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:01:09.467084   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:01:09.484224   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:01:09.501377   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:01:09.519084   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:01:09.536626   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:01:09.554098   13749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:01:09.571468   13749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:01:09.584372   13749 ssh_runner.go:195] Run: openssl version
	I0222 21:01:09.590922   13749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:01:09.599540   13749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:01:09.603509   13749 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:01:09.603556   13749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:01:09.609276   13749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 21:01:09.617876   13749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:01:09.626418   13749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:01:09.630484   13749 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:01:09.630539   13749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:01:09.635986   13749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:01:09.644247   13749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:01:09.652727   13749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:01:09.657064   13749 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:01:09.657129   13749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:01:09.663373   13749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
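	The openssl/ln steps above follow the standard OpenSSL hashed-directory convention: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (b5213941.0 for minikubeCA here). The same pattern as one short sketch:
	
	  cert=/usr/share/ca-certificates/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$cert")
	  sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"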
	I0222 21:01:09.671965   13749 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:01:09.672069   13749 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:01:09.691025   13749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:01:09.699445   13749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:01:09.706997   13749 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:01:09.707050   13749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:01:09.714706   13749 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:01:09.714730   13749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:01:09.764073   13749 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:01:09.764115   13749 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:01:09.932600   13749 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:01:09.932693   13749 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:01:09.932827   13749 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:01:10.084360   13749 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:01:10.085101   13749 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:01:10.091605   13749 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:01:10.160686   13749 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:01:10.182411   13749 out.go:204]   - Generating certificates and keys ...
	I0222 21:01:10.182566   13749 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:01:10.182624   13749 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:01:10.338495   13749 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 21:01:10.385681   13749 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0222 21:01:10.533772   13749 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0222 21:01:10.670048   13749 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0222 21:01:10.726854   13749 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0222 21:01:10.727024   13749 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-038000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0222 21:01:10.875762   13749 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0222 21:01:10.875899   13749 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-038000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0222 21:01:11.048835   13749 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 21:01:11.185911   13749 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 21:01:11.303113   13749 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0222 21:01:11.303177   13749 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:01:11.396284   13749 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:01:11.495132   13749 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:01:11.629969   13749 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:01:11.717237   13749 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:01:11.717851   13749 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:01:11.760264   13749 out.go:204]   - Booting up control plane ...
	I0222 21:01:11.760458   13749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:01:11.760592   13749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:01:11.760729   13749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:01:11.760869   13749 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:01:11.761124   13749 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:01:51.726305   13749 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:01:51.727743   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:01:51.728047   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:01:56.729529   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:01:56.729753   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:02:06.731349   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:02:06.731574   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:02:26.731465   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:02:26.731641   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:03:06.733249   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:03:06.733502   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:03:06.733527   13749 kubeadm.go:322] 
	I0222 21:03:06.733586   13749 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:03:06.733636   13749 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:03:06.733642   13749 kubeadm.go:322] 
	I0222 21:03:06.733694   13749 kubeadm.go:322] This error is likely caused by:
	I0222 21:03:06.733731   13749 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:03:06.733875   13749 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:03:06.733889   13749 kubeadm.go:322] 
	I0222 21:03:06.733996   13749 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:03:06.734105   13749 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:03:06.734180   13749 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:03:06.734198   13749 kubeadm.go:322] 
	I0222 21:03:06.734334   13749 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:03:06.734483   13749 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:03:06.734570   13749 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:03:06.734611   13749 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:03:06.734681   13749 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:03:06.734724   13749 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:03:06.736577   13749 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:03:06.736650   13749 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:03:06.736754   13749 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:03:06.736828   13749 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:03:06.736924   13749 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:03:06.737014   13749 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
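	kubeadm's failure text above already names the useful probes; gathered into one sequence they are (taken straight from the suggestions in the log, with only standard flags added):
	
	  systemctl status kubelet --no-pager
	  journalctl -xeu kubelet --no-pager | tail -n 50
	  docker ps -a | grep kube | grep -v pause
	  # then, for a failing container:
	  # docker logs CONTAINERID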
	W0222 21:03:06.737173   13749 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-038000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-038000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-038000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-038000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0222 21:03:06.737206   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0222 21:03:07.193919   13749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:03:07.204218   13749 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:03:07.204279   13749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:03:07.212062   13749 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:03:07.212105   13749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:03:07.270489   13749 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:03:07.270726   13749 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:03:07.446068   13749 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:03:07.446162   13749 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:03:07.446266   13749 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:03:07.604861   13749 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:03:07.605622   13749 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:03:07.611954   13749 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:03:07.678643   13749 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:03:07.700999   13749 out.go:204]   - Generating certificates and keys ...
	I0222 21:03:07.701086   13749 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:03:07.701154   13749 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:03:07.701226   13749 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 21:03:07.701305   13749 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 21:03:07.701397   13749 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 21:03:07.701468   13749 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 21:03:07.701558   13749 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 21:03:07.701655   13749 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 21:03:07.701771   13749 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 21:03:07.701844   13749 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 21:03:07.701878   13749 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 21:03:07.701943   13749 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:03:07.892486   13749 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:03:08.035255   13749 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:03:08.180984   13749 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:03:08.287446   13749 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:03:08.287985   13749 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:03:08.310215   13749 out.go:204]   - Booting up control plane ...
	I0222 21:03:08.310400   13749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:03:08.310575   13749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:03:08.310728   13749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:03:08.310898   13749 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:03:08.311178   13749 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:03:48.296730   13749 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:03:48.297513   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:03:48.297750   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:03:53.298286   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:03:53.298553   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:04:03.300458   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:04:03.300684   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:04:23.300137   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:04:23.300297   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:05:03.300517   13749 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:05:03.300668   13749 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:05:03.300676   13749 kubeadm.go:322] 
	I0222 21:05:03.300717   13749 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:05:03.300750   13749 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:05:03.300753   13749 kubeadm.go:322] 
	I0222 21:05:03.300779   13749 kubeadm.go:322] This error is likely caused by:
	I0222 21:05:03.300802   13749 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:05:03.300898   13749 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:05:03.300908   13749 kubeadm.go:322] 
	I0222 21:05:03.301029   13749 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:05:03.301066   13749 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:05:03.301099   13749 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:05:03.301122   13749 kubeadm.go:322] 
	I0222 21:05:03.301238   13749 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:05:03.301330   13749 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:05:03.301399   13749 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:05:03.301439   13749 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:05:03.301503   13749 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:05:03.301533   13749 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:05:03.304464   13749 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:05:03.304546   13749 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:05:03.304638   13749 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:05:03.304716   13749 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:05:03.304806   13749 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:05:03.304878   13749 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0222 21:05:03.304915   13749 kubeadm.go:403] StartCluster complete in 3m53.637693766s
	I0222 21:05:03.305013   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:05:03.327217   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.327233   13749 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:05:03.327291   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:05:03.348417   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.348428   13749 logs.go:280] No container was found matching "etcd"
	I0222 21:05:03.348491   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:05:03.371611   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.371635   13749 logs.go:280] No container was found matching "coredns"
	I0222 21:05:03.371734   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:05:03.393710   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.393727   13749 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:05:03.393807   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:05:03.414810   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.414822   13749 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:05:03.414883   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:05:03.434917   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.434955   13749 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:05:03.435078   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:05:03.453759   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.453772   13749 logs.go:280] No container was found matching "kindnet"
	I0222 21:05:03.453864   13749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:05:03.474530   13749 logs.go:278] 0 containers: []
	W0222 21:05:03.474543   13749 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:05:03.474551   13749 logs.go:124] Gathering logs for Docker ...
	I0222 21:05:03.474558   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:05:03.502762   13749 logs.go:124] Gathering logs for container status ...
	I0222 21:05:03.502782   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:05:05.552074   13749 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049307744s)
	I0222 21:05:05.552310   13749 logs.go:124] Gathering logs for kubelet ...
	I0222 21:05:05.552326   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:05:05.596924   13749 logs.go:124] Gathering logs for dmesg ...
	I0222 21:05:05.596942   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:05:05.612427   13749 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:05:05.612443   13749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:05:05.674238   13749 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0222 21:05:05.674262   13749 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0222 21:05:05.674291   13749 out.go:239] * 
	* 
	W0222 21:05:05.674461   13749 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:05:05.674475   13749 out.go:239] * 
	* 
	W0222 21:05:05.675166   13749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0222 21:05:05.761776   13749 out.go:177] 
	W0222 21:05:05.804007   13749 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:05:05.804088   13749 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0222 21:05:05.804119   13749 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0222 21:05:05.826707   13749 out.go:177] 

** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-038000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-038000: (1.68792864s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-038000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-038000 status --format={{.Host}}: exit status 7 (104.531389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m39.462787473s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-038000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (837.012938ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-038000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-038000
	    minikube start -p kubernetes-upgrade-038000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0380002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-038000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
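Before choosing among the three suggested options, the existing cluster's profile and server version can be confirmed; a short sketch (assumed commands, not part of the captured test output), reusing the profile and context names from this run:
	out/minikube-darwin-amd64 profile list
	kubectl --context kubernetes-upgrade-038000 version --output=json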
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
E0222 21:10:03.115664    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-038000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (44.19027331s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-02-22 21:10:32.291009 -0800 PST m=+2922.593317295
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-038000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-038000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d829e52cc4aacf9ef13965460cc6b10896b4b393a3fe90b2688a9c4902f880e5",
	        "Created": "2023-02-23T05:01:03.551781695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196644,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:05:09.128225565Z",
	            "FinishedAt": "2023-02-23T05:05:06.502855695Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/d829e52cc4aacf9ef13965460cc6b10896b4b393a3fe90b2688a9c4902f880e5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d829e52cc4aacf9ef13965460cc6b10896b4b393a3fe90b2688a9c4902f880e5/hostname",
	        "HostsPath": "/var/lib/docker/containers/d829e52cc4aacf9ef13965460cc6b10896b4b393a3fe90b2688a9c4902f880e5/hosts",
	        "LogPath": "/var/lib/docker/containers/d829e52cc4aacf9ef13965460cc6b10896b4b393a3fe90b2688a9c4902f880e5/d829e52cc4aacf9ef13965460cc6b10896b4b393a3fe90b2688a9c4902f880e5-json.log",
	        "Name": "/kubernetes-upgrade-038000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-038000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-038000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2667282160c3156c883462a8e96b820f6d36d8c3b8f12524be96ddb14084fb68-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2667282160c3156c883462a8e96b820f6d36d8c3b8f12524be96ddb14084fb68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2667282160c3156c883462a8e96b820f6d36d8c3b8f12524be96ddb14084fb68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2667282160c3156c883462a8e96b820f6d36d8c3b8f12524be96ddb14084fb68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-038000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-038000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-038000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-038000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-038000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "770907fdc114107915a2c6bd7c3be23260ecd0061e1db58c2a937e9536ea3560",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52697"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52698"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52699"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52695"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52696"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/770907fdc114",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-038000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d829e52cc4aa",
	                        "kubernetes-upgrade-038000"
	                    ],
	                    "NetworkID": "098d8db2c6c101beed744bf5f79247f7836f665fce862070d8fbbfb82df59a38",
	                    "EndpointID": "998a5bc55cb5852ed710bc86de0cba5644975d20dafd4c4e625684e4e312d875",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-038000 -n kubernetes-upgrade-038000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-038000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-038000 logs -n 25: (3.406894747s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo cat                           | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo cat                           | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo cat                           | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo docker                        | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo cat                           | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo cat                           | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo cat                           | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo cat                           | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo                               | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo find                          | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-310000 sudo crio                          | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-310000                                    | kindnet-310000            | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:09 PST |
	| start   | -p calico-310000 --memory=3072                       | calico-310000             | jenkins | v1.29.0 | 22 Feb 23 21:09 PST |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=docker                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-038000                         | kubernetes-upgrade-038000 | jenkins | v1.29.0 | 22 Feb 23 21:09 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-038000                         | kubernetes-upgrade-038000 | jenkins | v1.29.0 | 22 Feb 23 21:09 PST | 22 Feb 23 21:10 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 21:09:48
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 21:09:48.149240   16663 out.go:296] Setting OutFile to fd 1 ...
	I0222 21:09:48.149396   16663 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:09:48.149401   16663 out.go:309] Setting ErrFile to fd 2...
	I0222 21:09:48.149405   16663 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:09:48.149528   16663 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 21:09:48.151047   16663 out.go:303] Setting JSON to false
	I0222 21:09:48.171162   16663 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4163,"bootTime":1677124825,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 21:09:48.171273   16663 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 21:09:48.211538   16663 out.go:177] * [kubernetes-upgrade-038000] minikube v1.29.0 on Darwin 13.2
	I0222 21:09:48.286728   16663 notify.go:220] Checking for updates...
	I0222 21:09:48.323612   16663 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 21:09:48.344405   16663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:09:48.365518   16663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 21:09:48.386631   16663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 21:09:48.444433   16663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 21:09:48.486334   16663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 21:09:48.507845   16663 config.go:182] Loaded profile config "kubernetes-upgrade-038000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:09:48.508202   16663 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 21:09:48.579088   16663 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 21:09:48.579236   16663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:09:48.742805   16663 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:61 SystemTime:2023-02-23 05:09:48.638597564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:09:48.764823   16663 out.go:177] * Using the docker driver based on existing profile
	I0222 21:09:48.347662   16538 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.034587504s)
	I0222 21:09:48.347730   16538 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:09:48.423783   16538 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 21:09:48.489559   16538 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:09:48.565047   16538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:09:48.647200   16538 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 21:09:48.669495   16538 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 21:09:48.669580   16538 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 21:09:48.674422   16538 start.go:553] Will wait 60s for crictl version
	I0222 21:09:48.674477   16538 ssh_runner.go:195] Run: which crictl
	I0222 21:09:48.678602   16538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 21:09:48.781255   16538 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 21:09:48.781348   16538 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:09:48.809364   16538 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:09:48.786161   16663 start.go:296] selected driver: docker
	I0222 21:09:48.786178   16663 start.go:857] validating driver "docker" against &{Name:kubernetes-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-038000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:09:48.786258   16663 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 21:09:48.789118   16663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:09:49.009999   16663 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:61 SystemTime:2023-02-23 05:09:48.901072999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:09:49.010205   16663 cni.go:84] Creating CNI manager for ""
	I0222 21:09:49.010225   16663 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:09:49.010238   16663 start_flags.go:319] config:
	{Name:kubernetes-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:09:49.052857   16663 out.go:177] * Starting control plane node kubernetes-upgrade-038000 in cluster kubernetes-upgrade-038000
	I0222 21:09:49.073980   16663 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 21:09:49.095713   16663 out.go:177] * Pulling base image ...
	I0222 21:09:49.116963   16663 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:09:49.117080   16663 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 21:09:49.117110   16663 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 21:09:49.117120   16663 cache.go:57] Caching tarball of preloaded images
	I0222 21:09:49.117417   16663 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 21:09:49.117443   16663 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 21:09:49.118505   16663 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/config.json ...
	I0222 21:09:49.180269   16663 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 21:09:49.180288   16663 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 21:09:49.180330   16663 cache.go:193] Successfully downloaded all kic artifacts
	I0222 21:09:49.180408   16663 start.go:364] acquiring machines lock for kubernetes-upgrade-038000: {Name:mk53bee5973f8cb285d9d9235307ee3ee077de7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 21:09:49.180527   16663 start.go:368] acquired machines lock for "kubernetes-upgrade-038000" in 97.813µs
	I0222 21:09:49.180557   16663 start.go:96] Skipping create...Using existing machine configuration
	I0222 21:09:49.180565   16663 fix.go:55] fixHost starting: 
	I0222 21:09:49.180827   16663 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Status}}
	I0222 21:09:49.244323   16663 fix.go:103] recreateIfNeeded on kubernetes-upgrade-038000: state=Running err=<nil>
	W0222 21:09:49.244371   16663 fix.go:129] unexpected machine state, will restart: <nil>
	I0222 21:09:49.288165   16663 out.go:177] * Updating the running docker "kubernetes-upgrade-038000" container ...
	I0222 21:09:48.861438   16538 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 21:09:48.861552   16538 cli_runner.go:164] Run: docker exec -t calico-310000 dig +short host.docker.internal
	I0222 21:09:48.983493   16538 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:09:48.983607   16538 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:09:48.988477   16538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:09:49.000428   16538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-310000
	I0222 21:09:49.125166   16538 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:09:49.125283   16538 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:09:49.148055   16538 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 21:09:49.148074   16538 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:09:49.148167   16538 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:09:49.170626   16538 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 21:09:49.170641   16538 cache_images.go:84] Images are preloaded, skipping loading
	I0222 21:09:49.170731   16538 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:09:49.200782   16538 cni.go:84] Creating CNI manager for "calico"
	I0222 21:09:49.200811   16538 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 21:09:49.200834   16538 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-310000 NodeName:calico-310000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:09:49.200962   16538 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-310000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 21:09:49.201045   16538 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-310000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:calico-310000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0222 21:09:49.201180   16538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 21:09:49.210639   16538 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:09:49.210701   16538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:09:49.219616   16538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (445 bytes)
	I0222 21:09:49.234935   16538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:09:49.249955   16538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2089 bytes)
	I0222 21:09:49.263605   16538 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:09:49.267624   16538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:09:49.277672   16538 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000 for IP: 192.168.67.2
	I0222 21:09:49.277689   16538 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:09:49.277948   16538 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:09:49.278033   16538 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:09:49.278085   16538 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.key
	I0222 21:09:49.278099   16538 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt with IP's: []
	I0222 21:09:49.521891   16538 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt ...
	I0222 21:09:49.521907   16538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: {Name:mk3da1bd173fb96f7c601e477e1cc03aa7e2a61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:09:49.522272   16538 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.key ...
	I0222 21:09:49.522282   16538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.key: {Name:mkc46e74a3562e36ef6949a91fd93f9959d85fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:09:49.522524   16538 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.key.c7fa3a9e
	I0222 21:09:49.522542   16538 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0222 21:09:49.680029   16538 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.crt.c7fa3a9e ...
	I0222 21:09:49.680044   16538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.crt.c7fa3a9e: {Name:mk3e0095a9110b730c534cc342febe1c2752c5a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:09:49.680383   16538 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.key.c7fa3a9e ...
	I0222 21:09:49.680401   16538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.key.c7fa3a9e: {Name:mk41c679f91557d40fcbd0a2cc03725973cdff13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:09:49.680632   16538 certs.go:333] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.crt
	I0222 21:09:49.680828   16538 certs.go:337] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.key
	I0222 21:09:49.681008   16538 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.key
	I0222 21:09:49.681026   16538 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.crt with IP's: []
	I0222 21:09:49.831926   16538 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.crt ...
	I0222 21:09:49.831939   16538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.crt: {Name:mk7185ebbc1c69a89040f90629c5d767b1b459ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:09:49.832211   16538 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.key ...
	I0222 21:09:49.832219   16538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.key: {Name:mk898c0177499165741685c8a6295853f43f07b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
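Steps like "generating minikube-user signed cert" above boil down to issuing a client-auth certificate signed by the shared minikubeCA. A compact standard-library sketch of that flow, with illustrative subject names and lifetimes; it is not a copy of minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Self-signed CA standing in for minikubeCA; the real key lives under .minikube/ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client certificate signed by the CA, usable for kubectl client auth.
	cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER, _ := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)

	// PEM-encode the client certificate, as the writer does before saving client.crt.
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cliDER})))
}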
	I0222 21:09:49.832653   16538 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:09:49.832706   16538 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:09:49.832719   16538 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:09:49.832758   16538 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:09:49.832794   16538 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:09:49.832830   16538 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:09:49.832904   16538 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:09:49.833435   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:09:49.853981   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0222 21:09:49.874500   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:09:49.894448   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0222 21:09:49.914654   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:09:49.934765   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:09:49.955592   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:09:49.975893   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:09:49.994845   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:09:50.013303   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:09:50.030976   16538 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:09:50.048749   16538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:09:50.062511   16538 ssh_runner.go:195] Run: openssl version
	I0222 21:09:50.069507   16538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:09:50.078132   16538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:09:50.082825   16538 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:09:50.082885   16538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:09:50.088866   16538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 21:09:50.097266   16538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:09:50.106255   16538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:09:50.110404   16538 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:09:50.110459   16538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:09:50.116362   16538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:09:50.125052   16538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:09:50.134676   16538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:09:50.139627   16538 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:09:50.139691   16538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:09:50.145867   16538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
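The openssl/ln pairs above publish each CA under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients on the node can locate it. A small Go sketch of the same hash-and-symlink step, shelling out to openssl; the paths are illustrative and taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the subject hash of a CA certificate and
// exposes it as <certsDir>/<hash>.0, mirroring the ln -fs commands in the log.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Remove a stale link first so os.Symlink does not fail with EEXIST.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}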
	I0222 21:09:50.155018   16538 kubeadm.go:401] StartCluster: {Name:calico-310000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:calico-310000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:09:50.155164   16538 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:09:50.178084   16538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:09:50.187369   16538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:09:50.195615   16538 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:09:50.195677   16538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:09:50.203879   16538 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:09:50.203905   16538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:09:50.255223   16538 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0222 21:09:50.255300   16538 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:09:50.368086   16538 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:09:50.368229   16538 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:09:50.368371   16538 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:09:50.510964   16538 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:09:50.555396   16538 out.go:204]   - Generating certificates and keys ...
	I0222 21:09:50.555509   16538 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:09:50.555649   16538 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:09:50.649167   16538 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 21:09:50.778396   16538 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0222 21:09:50.930585   16538 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0222 21:09:50.993094   16538 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0222 21:09:51.202063   16538 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0222 21:09:51.202239   16538 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-310000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0222 21:09:51.354486   16538 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0222 21:09:51.354588   16538 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-310000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0222 21:09:51.419707   16538 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 21:09:51.645045   16538 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 21:09:52.425973   16538 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0222 21:09:52.426029   16538 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:09:52.568460   16538 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:09:52.656780   16538 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:09:52.757535   16538 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:09:52.888306   16538 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:09:52.901518   16538 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:09:52.902184   16538 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:09:52.902220   16538 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0222 21:09:52.984971   16538 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:09:49.309335   16663 machine.go:88] provisioning docker machine ...
	I0222 21:09:49.309376   16663 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-038000"
	I0222 21:09:49.309457   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:49.379272   16663 main.go:141] libmachine: Using SSH client type: native
	I0222 21:09:49.379771   16663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52697 <nil> <nil>}
	I0222 21:09:49.379784   16663 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-038000 && echo "kubernetes-upgrade-038000" | sudo tee /etc/hostname
	I0222 21:09:49.527815   16663 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-038000
	
	I0222 21:09:49.527909   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:49.597217   16663 main.go:141] libmachine: Using SSH client type: native
	I0222 21:09:49.597632   16663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52697 <nil> <nil>}
	I0222 21:09:49.597648   16663 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-038000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-038000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-038000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 21:09:49.737380   16663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:09:49.737409   16663 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 21:09:49.737425   16663 ubuntu.go:177] setting up certificates
	I0222 21:09:49.737436   16663 provision.go:83] configureAuth start
	I0222 21:09:49.737532   16663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-038000
	I0222 21:09:49.800633   16663 provision.go:138] copyHostCerts
	I0222 21:09:49.800746   16663 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 21:09:49.800758   16663 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 21:09:49.800869   16663 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 21:09:49.801081   16663 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 21:09:49.801088   16663 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 21:09:49.801151   16663 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 21:09:49.801315   16663 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 21:09:49.801321   16663 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 21:09:49.801382   16663 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 21:09:49.801510   16663 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-038000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-038000]
	I0222 21:09:49.910134   16663 provision.go:172] copyRemoteCerts
	I0222 21:09:49.910213   16663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 21:09:49.910273   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:49.974632   16663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52697 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:09:50.068700   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 21:09:50.087411   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0222 21:09:50.106245   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 21:09:50.125015   16663 provision.go:86] duration metric: configureAuth took 387.529041ms
	I0222 21:09:50.125034   16663 ubuntu.go:193] setting minikube options for container-runtime
	I0222 21:09:50.125256   16663 config.go:182] Loaded profile config "kubernetes-upgrade-038000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:09:50.125324   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:50.189293   16663 main.go:141] libmachine: Using SSH client type: native
	I0222 21:09:50.189654   16663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52697 <nil> <nil>}
	I0222 21:09:50.189664   16663 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 21:09:50.322032   16663 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 21:09:50.322049   16663 ubuntu.go:71] root file system type: overlay
	I0222 21:09:50.322151   16663 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 21:09:50.322237   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:50.389582   16663 main.go:141] libmachine: Using SSH client type: native
	I0222 21:09:50.389940   16663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52697 <nil> <nil>}
	I0222 21:09:50.389998   16663 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 21:09:50.538456   16663 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 21:09:50.538547   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:50.603824   16663 main.go:141] libmachine: Using SSH client type: native
	I0222 21:09:50.604221   16663 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 52697 <nil> <nil>}
	I0222 21:09:50.604236   16663 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 21:09:50.744592   16663 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:09:50.744611   16663 machine.go:91] provisioned docker machine in 1.435295725s
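The diff || { mv; daemon-reload; enable; restart; } command above only restarts Docker when the rendered unit actually differs from what is on disk. A Go sketch of that write-if-changed pattern; the unit path, body, and helper name are placeholders rather than minikube's provisioner code:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnitIfChanged swaps in the rendered unit and restarts the service
// only when the file content changed, avoiding needless daemon restarts.
func installUnitIfChanged(path string, rendered []byte, service string) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unit already up to date
	}
	if err := os.WriteFile(path, rendered, 0644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n") // placeholder unit body
	if err := installUnitIfChanged("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}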
	I0222 21:09:50.744623   16663 start.go:300] post-start starting for "kubernetes-upgrade-038000" (driver="docker")
	I0222 21:09:50.744629   16663 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 21:09:50.744720   16663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 21:09:50.744771   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:50.807034   16663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52697 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:09:50.904031   16663 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 21:09:50.908296   16663 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 21:09:50.908318   16663 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 21:09:50.908325   16663 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 21:09:50.908330   16663 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 21:09:50.908338   16663 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 21:09:50.908436   16663 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 21:09:50.908616   16663 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 21:09:50.908813   16663 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 21:09:50.917099   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:09:50.936866   16663 start.go:303] post-start completed in 192.236886ms
	I0222 21:09:50.936952   16663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 21:09:50.937043   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:51.001682   16663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52697 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:09:51.094267   16663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 21:09:51.100056   16663 fix.go:57] fixHost completed within 1.919525041s
	I0222 21:09:51.100073   16663 start.go:83] releasing machines lock for "kubernetes-upgrade-038000", held for 1.919576137s
	I0222 21:09:51.100163   16663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-038000
	I0222 21:09:51.164335   16663 ssh_runner.go:195] Run: cat /version.json
	I0222 21:09:51.164364   16663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 21:09:51.164425   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:51.164484   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:51.234305   16663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52697 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:09:51.234525   16663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52697 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:09:51.383375   16663 ssh_runner.go:195] Run: systemctl --version
	I0222 21:09:51.388941   16663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0222 21:09:51.394121   16663 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0222 21:09:51.394180   16663 ssh_runner.go:195] Run: which cri-dockerd
	I0222 21:09:51.398623   16663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 21:09:51.406314   16663 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 21:09:51.420714   16663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0222 21:09:51.430523   16663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0222 21:09:51.438817   16663 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0222 21:09:51.438863   16663 start.go:485] detecting cgroup driver to use...
	I0222 21:09:51.438877   16663 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:09:51.438996   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:09:51.453830   16663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 21:09:51.463685   16663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 21:09:51.473602   16663 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 21:09:51.473668   16663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 21:09:51.483354   16663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:09:51.493125   16663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 21:09:51.502636   16663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:09:51.511827   16663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 21:09:51.520384   16663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 21:09:51.529980   16663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 21:09:51.538299   16663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 21:09:51.546540   16663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:09:51.630131   16663 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 21:09:55.103320   16663 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (3.473242037s)
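The sed edits above rewrite /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false to match the cgroupfs driver detected on this host. A Go sketch of one such edit using a line-anchored regexp; the path is the one from the log, and the helper is hypothetical:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites the SystemdCgroup key in a containerd config,
// the same edit the sed command in the log performs.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// cgroupfs driver on this host, so SystemdCgroup stays false.
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}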
	I0222 21:09:55.103339   16663 start.go:485] detecting cgroup driver to use...
	I0222 21:09:55.103350   16663 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:09:55.103419   16663 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 21:09:55.116152   16663 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 21:09:55.116231   16663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 21:09:55.127268   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:09:55.142598   16663 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 21:09:55.234727   16663 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 21:09:55.425073   16663 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 21:09:55.425097   16663 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 21:09:55.501600   16663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:09:55.605162   16663 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 21:09:56.250729   16663 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:09:56.320657   16663 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 21:09:56.394584   16663 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:09:56.464549   16663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:09:56.561217   16663 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 21:09:56.578350   16663 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 21:09:56.578445   16663 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 21:09:56.583074   16663 start.go:553] Will wait 60s for crictl version
	I0222 21:09:56.583143   16663 ssh_runner.go:195] Run: which crictl
	I0222 21:09:56.587305   16663 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 21:09:56.655760   16663 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 21:09:56.655848   16663 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:09:56.714253   16663 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:09:53.005114   16538 out.go:204]   - Booting up control plane ...
	I0222 21:09:53.005219   16538 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:09:53.005321   16538 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:09:53.005390   16538 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:09:53.005488   16538 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:09:53.005689   16538 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:09:56.853701   16663 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 21:09:56.853919   16663 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-038000 dig +short host.docker.internal
	I0222 21:09:56.972179   16663 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:09:56.972308   16663 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:09:56.977113   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:57.037285   16663 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:09:57.037368   16663 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:09:57.057720   16663 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:09:57.057738   16663 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:09:57.057840   16663 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:09:57.079855   16663 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:09:57.079870   16663 cache_images.go:84] Images are preloaded, skipping loading
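"Images are preloaded, skipping loading" is decided by listing the local images and checking that every required image:tag is present. A Go sketch of that check, using the same docker command as the log and a few of the image names from the stdout block above; it is an illustration, not minikube's cache_images.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// missingImages runs `docker images --format {{.Repository}}:{{.Tag}}` and
// reports which of the wanted images are not present locally.
func missingImages(wanted []string) ([]string, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	var missing []string
	for _, img := range wanted {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	missing, err := missingImages([]string{
		"registry.k8s.io/kube-apiserver:v1.26.1",
		"registry.k8s.io/etcd:3.5.6-0",
		"registry.k8s.io/pause:3.9",
	})
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	fmt.Println("missing:", missing)
}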
	I0222 21:09:57.079954   16663 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:09:57.107823   16663 cni.go:84] Creating CNI manager for ""
	I0222 21:09:57.107842   16663 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:09:57.107858   16663 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 21:09:57.107875   16663 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-038000 NodeName:kubernetes-upgrade-038000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:09:57.107995   16663 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-038000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 21:09:57.108073   16663 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-038000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0222 21:09:57.108145   16663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 21:09:57.116356   16663 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:09:57.116413   16663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:09:57.124372   16663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0222 21:09:57.138206   16663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:09:57.150977   16663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0222 21:09:57.164575   16663 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:09:57.168633   16663 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000 for IP: 192.168.76.2
	I0222 21:09:57.168648   16663 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:09:57.168808   16663 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:09:57.168856   16663 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:09:57.168944   16663 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.key
	I0222 21:09:57.169025   16663 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key.31bdca25
	I0222 21:09:57.169086   16663 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.key
	I0222 21:09:57.169313   16663 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:09:57.169356   16663 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:09:57.169368   16663 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:09:57.169405   16663 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:09:57.169440   16663 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:09:57.169472   16663 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:09:57.169567   16663 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:09:57.170127   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:09:57.187714   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0222 21:09:57.205630   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:09:57.224821   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0222 21:09:57.242360   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:09:57.260586   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:09:57.278229   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:09:57.295854   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:09:57.313780   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:09:57.331592   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:09:57.349761   16663 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:09:57.367461   16663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:09:57.380803   16663 ssh_runner.go:195] Run: openssl version
	I0222 21:09:57.386490   16663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:09:57.395084   16663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:09:57.399417   16663 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:09:57.399457   16663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:09:57.405663   16663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:09:57.413418   16663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:09:57.421737   16663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:09:57.425886   16663 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:09:57.425950   16663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:09:57.431825   16663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 21:09:57.440097   16663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:09:57.449014   16663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:09:57.453370   16663 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:09:57.453420   16663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:09:57.459006   16663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
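Note: the ln commands above follow OpenSSL's subject-hash naming convention for the trusted-CA directory: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a ".0" suffix. A minimal shell sketch of that step (paths illustrative, taken from the log above):
    cert=/usr/share/ca-certificates/minikubeCA.pem      # illustrative path
    hash=$(openssl x509 -hash -noout -in "$cert")        # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"       # hash-named link OpenSSL looks up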
	I0222 21:09:57.467039   16663 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-038000 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:09:57.467141   16663 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:09:57.486828   16663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:09:57.494926   16663 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0222 21:09:57.494944   16663 kubeadm.go:633] restartCluster start
	I0222 21:09:57.495002   16663 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0222 21:09:57.502170   16663 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:09:57.502246   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:09:57.563927   16663 kubeconfig.go:92] found "kubernetes-upgrade-038000" server: "https://127.0.0.1:52696"
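Note: the host port (52696 here) is read from Docker's published-port metadata with the inspect template shown above; run standalone against the same container it looks like this:
    # prints the host port mapped to the container's 8443/tcp, e.g. 52696
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-038000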
	I0222 21:09:57.564560   16663 kapi.go:59] client config for kubernetes-upgrade-038000: &rest.Config{Host:"https://127.0.0.1:52696", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 21:09:57.565336   16663 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0222 21:09:57.573443   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:09:57.573494   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:09:57.582539   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:09:58.083625   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:09:58.083794   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:09:58.095000   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:09:58.584666   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:09:58.584840   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:09:58.596031   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:09:59.082941   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:09:59.083080   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:09:59.094560   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:09:59.584655   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:09:59.584813   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:09:59.596016   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:00.084336   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:10:00.084523   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:10:00.095839   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:00.584319   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:10:00.584494   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:10:00.595724   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:01.082804   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:10:01.082926   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:10:01.094083   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:01.582723   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:10:01.582884   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:10:01.593886   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:02.082549   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:10:02.082639   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:10:02.095391   16663 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:02.583629   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:10:02.583770   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:10:02.596512   16663 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12320/cgroup
	W0222 21:10:02.608093   16663 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:02.608174   16663 ssh_runner.go:195] Run: ls
	I0222 21:10:02.615355   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:06.495411   16538 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.502350 seconds
	I0222 21:10:06.495562   16538 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0222 21:10:06.556635   16538 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0222 21:10:07.072289   16538 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0222 21:10:07.072468   16538 kubeadm.go:322] [mark-control-plane] Marking the node calico-310000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0222 21:10:07.581568   16538 kubeadm.go:322] [bootstrap-token] Using token: qkp1xj.nhzyf6jrkxhhp59n
	I0222 21:10:07.618779   16538 out.go:204]   - Configuring RBAC rules ...
	I0222 21:10:07.618961   16538 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0222 21:10:07.622016   16538 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0222 21:10:07.662976   16538 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0222 21:10:07.665308   16538 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0222 21:10:07.668632   16538 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0222 21:10:07.670919   16538 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0222 21:10:07.679900   16538 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0222 21:10:07.836547   16538 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0222 21:10:08.024889   16538 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0222 21:10:08.030645   16538 kubeadm.go:322] 
	I0222 21:10:08.030742   16538 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0222 21:10:08.030768   16538 kubeadm.go:322] 
	I0222 21:10:08.030871   16538 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0222 21:10:08.030886   16538 kubeadm.go:322] 
	I0222 21:10:08.030935   16538 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0222 21:10:08.030996   16538 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0222 21:10:08.031072   16538 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0222 21:10:08.031090   16538 kubeadm.go:322] 
	I0222 21:10:08.031170   16538 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0222 21:10:08.031184   16538 kubeadm.go:322] 
	I0222 21:10:08.031262   16538 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0222 21:10:08.031273   16538 kubeadm.go:322] 
	I0222 21:10:08.031316   16538 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0222 21:10:08.031368   16538 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0222 21:10:08.031435   16538 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0222 21:10:08.031442   16538 kubeadm.go:322] 
	I0222 21:10:08.031523   16538 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0222 21:10:08.031623   16538 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0222 21:10:08.031632   16538 kubeadm.go:322] 
	I0222 21:10:08.031770   16538 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qkp1xj.nhzyf6jrkxhhp59n \
	I0222 21:10:08.031870   16538 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf \
	I0222 21:10:08.031917   16538 kubeadm.go:322] 	--control-plane 
	I0222 21:10:08.031935   16538 kubeadm.go:322] 
	I0222 21:10:08.032017   16538 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0222 21:10:08.032026   16538 kubeadm.go:322] 
	I0222 21:10:08.032088   16538 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qkp1xj.nhzyf6jrkxhhp59n \
	I0222 21:10:08.032205   16538 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 21:10:08.032434   16538 kubeadm.go:322] W0223 05:09:50.248466    1295 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 21:10:08.032597   16538 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 21:10:08.032715   16538 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
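Note: the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key. As a sketch (not part of this test run), it can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation:
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'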
	I0222 21:10:08.032728   16538 cni.go:84] Creating CNI manager for "calico"
	I0222 21:10:08.075171   16538 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0222 21:10:07.617651   16663 api_server.go:268] stopped: https://127.0.0.1:52696/healthz: Get "https://127.0.0.1:52696/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0222 21:10:07.617741   16663 retry.go:31] will retry after 293.305999ms: state is "Stopped"
	I0222 21:10:07.911115   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:08.096268   16538 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0222 21:10:08.096282   16538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (235268 bytes)
	I0222 21:10:08.120722   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0222 21:10:09.233677   16538 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.112956629s)
	I0222 21:10:09.233700   16538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0222 21:10:09.233793   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321 minikube.k8s.io/name=calico-310000 minikube.k8s.io/updated_at=2023_02_22T21_10_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:09.233793   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:09.339715   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:09.390809   16538 ops.go:34] apiserver oom_adj: -16
	I0222 21:10:09.950845   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:10.450637   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:10.950796   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:11.450575   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:11.950691   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:12.450099   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:12.911308   16663 api_server.go:268] stopped: https://127.0.0.1:52696/healthz: Get "https://127.0.0.1:52696/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0222 21:10:12.911341   16663 retry.go:31] will retry after 321.864281ms: state is "Stopped"
	I0222 21:10:12.950472   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:13.451091   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:13.950157   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:14.451558   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:14.950125   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:15.450604   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:15.951189   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:16.450067   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:16.950036   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:17.450007   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:13.235302   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:17.949996   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:18.450011   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:18.950471   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:19.450067   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:19.949982   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:20.449900   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:20.950010   16538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:10:21.107791   16538 kubeadm.go:1073] duration metric: took 11.87430339s to wait for elevateKubeSystemPrivileges.
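Note: the repeated "kubectl get sa default" calls above are polling until the default ServiceAccount exists (created by the controller-manager once the control plane is up). A rough shell equivalent of that wait loop, using the same binary and kubeconfig paths as the log, would be:
    # assumed sketch of the wait, not minikube's actual code
    until sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done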
	I0222 21:10:21.107812   16538 kubeadm.go:403] StartCluster complete in 30.953429805s
	I0222 21:10:21.107833   16538 settings.go:142] acquiring lock: {Name:mk09b0ae3061a5d1df7256089aea48f15d65cbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:10:21.107933   16538 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:10:21.109010   16538 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:10:21.109393   16538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0222 21:10:21.109415   16538 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0222 21:10:21.109534   16538 addons.go:65] Setting storage-provisioner=true in profile "calico-310000"
	I0222 21:10:21.109572   16538 addons.go:65] Setting default-storageclass=true in profile "calico-310000"
	I0222 21:10:21.109575   16538 addons.go:227] Setting addon storage-provisioner=true in "calico-310000"
	I0222 21:10:21.109582   16538 config.go:182] Loaded profile config "calico-310000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:10:21.109598   16538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-310000"
	I0222 21:10:21.109649   16538 host.go:66] Checking if "calico-310000" exists ...
	I0222 21:10:21.110013   16538 cli_runner.go:164] Run: docker container inspect calico-310000 --format={{.State.Status}}
	I0222 21:10:21.110193   16538 cli_runner.go:164] Run: docker container inspect calico-310000 --format={{.State.Status}}
	I0222 21:10:21.199027   16538 addons.go:227] Setting addon default-storageclass=true in "calico-310000"
	I0222 21:10:21.234011   16538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 21:10:21.234075   16538 host.go:66] Checking if "calico-310000" exists ...
	I0222 21:10:21.253991   16538 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:10:21.254004   16538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0222 21:10:21.254088   16538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-310000
	I0222 21:10:21.255239   16538 cli_runner.go:164] Run: docker container inspect calico-310000 --format={{.State.Status}}
	I0222 21:10:21.265272   16538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0222 21:10:21.340344   16538 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0222 21:10:21.340384   16538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0222 21:10:21.340507   16538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-310000
	I0222 21:10:21.341310   16538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53286 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/calico-310000/id_rsa Username:docker}
	I0222 21:10:21.413253   16538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53286 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/calico-310000/id_rsa Username:docker}
	I0222 21:10:21.535705   16538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0222 21:10:21.553762   16538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:10:21.691563   16538 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-310000" context rescaled to 1 replicas
	I0222 21:10:21.691591   16538 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 21:10:21.715027   16538 out.go:177] * Verifying Kubernetes components...
	I0222 21:10:21.735798   16538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:10:22.424763   16538 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.159467123s)
	I0222 21:10:22.424799   16538 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
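Note: the sed pipeline above edits the CoreDNS ConfigMap in place and replaces it; based on that sed expression, the stanza injected into the Corefile is:
    hosts {
       192.168.65.2 host.minikube.internal
       fallthrough
    }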
	I0222 21:10:22.621214   16538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.067446773s)
	I0222 21:10:22.621332   16538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-310000
	I0222 21:10:22.646300   16538 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0222 21:10:22.720452   16538 addons.go:492] enable addons completed in 1.611027239s: enabled=[default-storageclass storage-provisioner]
	I0222 21:10:22.737612   16538 node_ready.go:35] waiting up to 15m0s for node "calico-310000" to be "Ready" ...
	I0222 21:10:22.742893   16538 node_ready.go:49] node "calico-310000" has status "Ready":"True"
	I0222 21:10:22.742909   16538 node_ready.go:38] duration metric: took 5.253762ms waiting for node "calico-310000" to be "Ready" ...
	I0222 21:10:22.742920   16538 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 21:10:22.757388   16538 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-7bdbfc669-rkw7f" in "kube-system" namespace to be "Ready" ...
	I0222 21:10:18.236091   16663 api_server.go:268] stopped: https://127.0.0.1:52696/healthz: Get "https://127.0.0.1:52696/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0222 21:10:18.236128   16663 api_server.go:165] Checking apiserver status ...
	I0222 21:10:18.236221   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:10:18.247925   16663 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12320/cgroup
	W0222 21:10:18.255983   16663 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:18.256050   16663 ssh_runner.go:195] Run: ls
	I0222 21:10:18.260244   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:20.991807   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0222 21:10:20.991842   16663 retry.go:31] will retry after 301.749981ms: https://127.0.0.1:52696/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0222 21:10:21.293753   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:21.301807   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:21.301842   16663 retry.go:31] will retry after 276.816558ms: https://127.0.0.1:52696/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:21.578911   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:21.585389   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:21.585410   16663 retry.go:31] will retry after 476.451646ms: https://127.0.0.1:52696/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:22.063742   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:22.072045   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:22.072087   16663 retry.go:31] will retry after 434.53216ms: https://127.0.0.1:52696/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:22.506727   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:22.515227   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 200:
	ok
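Note: the bare "ok" body above is the terse form of the apiserver health endpoint; the per-check [+]/[-] listings seen in the earlier 500 responses can also be requested explicitly. A sketch, run from inside the node with the paths already shown in this log:
    sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
      get --raw '/healthz?verbose'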
	I0222 21:10:22.531801   16663 system_pods.go:86] 5 kube-system pods found
	I0222 21:10:22.531823   16663 system_pods.go:89] "etcd-kubernetes-upgrade-038000" [b339f895-e2d2-4f99-80a8-27fd05ee4ecc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0222 21:10:22.531831   16663 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-038000" [b8c50d98-78f2-4f6f-8e01-4de788a829f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0222 21:10:22.531835   16663 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-038000" [8748bee7-8c2f-482c-a4a5-ba6937e1565e] Running
	I0222 21:10:22.531843   16663 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-038000" [5fa5b701-04b0-4854-bba7-f228f764e806] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0222 21:10:22.531852   16663 system_pods.go:89] "storage-provisioner" [e5842114-08c1-45f5-b602-d025e6dcc2f8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0222 21:10:22.531861   16663 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy
	I0222 21:10:22.531869   16663 kubeadm.go:1120] stopping kube-system containers ...
	I0222 21:10:22.531980   16663 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:10:22.560231   16663 docker.go:456] Stopping containers: [1641955c419d b429b3682b79 ce8140898c3f 430bec760be0 b52475c02717 fc15fc425c77 8b2e3c6638cb d05014e0c038 4c001e5ef006 17936696d3a0 6193fbf503f5 522fd48f3de8 45d1585757f4 eefdfd3e86cd 221fd193dffb a1e53be9f9b1 d8afd0efbc5c 57882ed70547 9cd6863f924a]
	I0222 21:10:22.560315   16663 ssh_runner.go:195] Run: docker stop 1641955c419d b429b3682b79 ce8140898c3f 430bec760be0 b52475c02717 fc15fc425c77 8b2e3c6638cb d05014e0c038 4c001e5ef006 17936696d3a0 6193fbf503f5 522fd48f3de8 45d1585757f4 eefdfd3e86cd 221fd193dffb a1e53be9f9b1 d8afd0efbc5c 57882ed70547 9cd6863f924a
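Note: the container IDs above come from filtering on the kubelet's k8s_<container>_<pod>_<namespace>_ naming pattern (the --filter=name regex in the ps command). A rough one-liner combining the ps and stop pair above:
    docker ps -a --filter name='k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop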
	I0222 21:10:24.777271   16538 pod_ready.go:102] pod "calico-kube-controllers-7bdbfc669-rkw7f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:10:26.794051   16538 pod_ready.go:102] pod "calico-kube-controllers-7bdbfc669-rkw7f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:10:23.225745   16663 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0222 21:10:23.285361   16663 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:10:23.301274   16663 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 23 05:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 23 05:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 23 05:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 23 05:09 /etc/kubernetes/scheduler.conf
	
	I0222 21:10:23.301374   16663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0222 21:10:23.315835   16663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0222 21:10:23.331425   16663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0222 21:10:23.345535   16663 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:23.345622   16663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0222 21:10:23.358276   16663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0222 21:10:23.373562   16663 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:10:23.373636   16663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
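Note: the grep/rm pairs above drop any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443, so the later "kubeadm init phase kubeconfig" run regenerates them. As a standalone sketch:
    # keep only kubeconfigs that point at the expected endpoint (sketch)
    for f in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" || sudo rm -f "$f"
    done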
	I0222 21:10:23.386165   16663 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:10:23.402779   16663 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0222 21:10:23.402796   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:10:23.485220   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:10:23.967786   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:10:24.153672   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:10:24.231660   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:10:24.325324   16663 api_server.go:51] waiting for apiserver process to appear ...
	I0222 21:10:24.325405   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:10:24.838558   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:10:25.338841   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:10:25.390392   16663 api_server.go:71] duration metric: took 1.065094066s to wait for apiserver process to appear ...
	I0222 21:10:25.390422   16663 api_server.go:87] waiting for apiserver healthz status ...
	I0222 21:10:25.390432   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:29.149549   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0222 21:10:29.149572   16663 api_server.go:102] status: https://127.0.0.1:52696/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
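Note: the 403 above is expected while the restarted apiserver is still installing its bootstrap RBAC roles: the unauthenticated probe runs as system:anonymous, which can only read /healthz once the default system:public-info-viewer binding exists. A sketch for checking that from inside the node (assumed commands, not part of this run):
    sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
      auth can-i get /healthz --as=system:anonymous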
	I0222 21:10:29.649670   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:29.655017   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:10:29.655031   16663 api_server.go:102] status: https://127.0.0.1:52696/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:30.149917   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:30.156565   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:10:30.156583   16663 api_server.go:102] status: https://127.0.0.1:52696/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:10:30.649698   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:30.655541   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 200:
	ok
	I0222 21:10:30.663018   16663 api_server.go:140] control plane version: v1.26.1
	I0222 21:10:30.663033   16663 api_server.go:130] duration metric: took 5.272712911s to wait for apiserver health ...
	I0222 21:10:30.663040   16663 cni.go:84] Creating CNI manager for ""
	I0222 21:10:30.663048   16663 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:10:30.693591   16663 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0222 21:10:30.715151   16663 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0222 21:10:30.729721   16663 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
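Note: the 457-byte /etc/cni/net.d/1-k8s.conflist pushed above is the bridge CNI configuration minikube recommends for the docker driver + docker runtime combination. Its exact contents are not shown in this log; a typical bridge-plus-portmap conflist has roughly this shape (illustrative only, values assumed):
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }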
	I0222 21:10:30.751535   16663 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 21:10:30.759556   16663 system_pods.go:59] 5 kube-system pods found
	I0222 21:10:30.759576   16663 system_pods.go:61] "etcd-kubernetes-upgrade-038000" [b339f895-e2d2-4f99-80a8-27fd05ee4ecc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0222 21:10:30.759583   16663 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-038000" [b8c50d98-78f2-4f6f-8e01-4de788a829f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0222 21:10:30.759594   16663 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-038000" [8748bee7-8c2f-482c-a4a5-ba6937e1565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0222 21:10:30.759601   16663 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-038000" [5fa5b701-04b0-4854-bba7-f228f764e806] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0222 21:10:30.759634   16663 system_pods.go:61] "storage-provisioner" [e5842114-08c1-45f5-b602-d025e6dcc2f8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0222 21:10:30.759644   16663 system_pods.go:74] duration metric: took 8.098552ms to wait for pod list to return data ...
	I0222 21:10:30.759653   16663 node_conditions.go:102] verifying NodePressure condition ...
	I0222 21:10:30.763731   16663 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 21:10:30.763746   16663 node_conditions.go:123] node cpu capacity is 6
	I0222 21:10:30.763781   16663 node_conditions.go:105] duration metric: took 4.119162ms to run NodePressure ...
	I0222 21:10:30.763807   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:10:30.975158   16663 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0222 21:10:30.983047   16663 ops.go:34] apiserver oom_adj: -16
	I0222 21:10:30.983057   16663 kubeadm.go:637] restartCluster took 33.488781778s
	I0222 21:10:30.983062   16663 kubeadm.go:403] StartCluster complete in 33.516711636s
	I0222 21:10:30.983075   16663 settings.go:142] acquiring lock: {Name:mk09b0ae3061a5d1df7256089aea48f15d65cbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:10:30.983150   16663 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:10:30.983847   16663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:10:30.984096   16663 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0222 21:10:30.984129   16663 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0222 21:10:30.984192   16663 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-038000"
	I0222 21:10:30.984206   16663 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-038000"
	I0222 21:10:30.984208   16663 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-038000"
	I0222 21:10:30.984231   16663 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-038000"
	I0222 21:10:30.984242   16663 config.go:182] Loaded profile config "kubernetes-upgrade-038000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:10:30.984257   16663 host.go:66] Checking if "kubernetes-upgrade-038000" exists ...
	I0222 21:10:30.984492   16663 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Status}}
	I0222 21:10:30.984581   16663 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Status}}
	I0222 21:10:30.984612   16663 kapi.go:59] client config for kubernetes-upgrade-038000: &rest.Config{Host:"https://127.0.0.1:52696", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 21:10:30.991020   16663 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-038000" context rescaled to 1 replicas
	I0222 21:10:30.991059   16663 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 21:10:31.013002   16663 out.go:177] * Verifying Kubernetes components...
	I0222 21:10:31.054076   16663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:10:31.071445   16663 kapi.go:59] client config for kubernetes-upgrade-038000: &rest.Config{Host:"https://127.0.0.1:52696", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubernetes-upgrade-038000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0222 21:10:31.093067   16663 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 21:10:31.079189   16663 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-038000"
	I0222 21:10:31.086433   16663 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0222 21:10:31.086484   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	W0222 21:10:31.130069   16663 addons.go:236] addon default-storageclass should already be in state true
	I0222 21:10:31.130106   16663 host.go:66] Checking if "kubernetes-upgrade-038000" exists ...
	I0222 21:10:31.130174   16663 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:10:31.130187   16663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0222 21:10:31.130258   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:10:31.133805   16663 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-038000 --format={{.State.Status}}
	I0222 21:10:31.217985   16663 api_server.go:51] waiting for apiserver process to appear ...
	I0222 21:10:31.218109   16663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:10:31.219311   16663 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0222 21:10:31.219327   16663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0222 21:10:31.219467   16663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-038000
	I0222 21:10:31.223835   16663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52697 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:10:31.237855   16663 api_server.go:71] duration metric: took 246.773797ms to wait for apiserver process to appear ...
	I0222 21:10:31.237893   16663 api_server.go:87] waiting for apiserver healthz status ...
	I0222 21:10:31.237913   16663 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52696/healthz ...
	I0222 21:10:31.247242   16663 api_server.go:278] https://127.0.0.1:52696/healthz returned 200:
	ok
	I0222 21:10:31.258134   16663 api_server.go:140] control plane version: v1.26.1
	I0222 21:10:31.258151   16663 api_server.go:130] duration metric: took 20.244949ms to wait for apiserver health ...
	I0222 21:10:31.258159   16663 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 21:10:31.264214   16663 system_pods.go:59] 5 kube-system pods found
	I0222 21:10:31.264236   16663 system_pods.go:61] "etcd-kubernetes-upgrade-038000" [b339f895-e2d2-4f99-80a8-27fd05ee4ecc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0222 21:10:31.264243   16663 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-038000" [b8c50d98-78f2-4f6f-8e01-4de788a829f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0222 21:10:31.264256   16663 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-038000" [8748bee7-8c2f-482c-a4a5-ba6937e1565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0222 21:10:31.264264   16663 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-038000" [5fa5b701-04b0-4854-bba7-f228f764e806] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0222 21:10:31.264272   16663 system_pods.go:61] "storage-provisioner" [e5842114-08c1-45f5-b602-d025e6dcc2f8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0222 21:10:31.264281   16663 system_pods.go:74] duration metric: took 6.117536ms to wait for pod list to return data ...
	I0222 21:10:31.264288   16663 kubeadm.go:578] duration metric: took 273.212579ms to wait for : map[apiserver:true system_pods:true] ...
	I0222 21:10:31.264300   16663 node_conditions.go:102] verifying NodePressure condition ...
	I0222 21:10:31.268000   16663 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 21:10:31.268016   16663 node_conditions.go:123] node cpu capacity is 6
	I0222 21:10:31.268031   16663 node_conditions.go:105] duration metric: took 3.726716ms to run NodePressure ...
	I0222 21:10:31.268039   16663 start.go:228] waiting for startup goroutines ...
	I0222 21:10:31.298121   16663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52697 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/kubernetes-upgrade-038000/id_rsa Username:docker}
	I0222 21:10:31.359036   16663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:10:31.413847   16663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0222 21:10:32.075792   16663 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0222 21:10:32.136281   16663 addons.go:492] enable addons completed in 1.152150526s: enabled=[storage-provisioner default-storageclass]
	I0222 21:10:32.136366   16663 start.go:233] waiting for cluster config update ...
	I0222 21:10:32.136385   16663 start.go:242] writing updated cluster config ...
	I0222 21:10:32.136751   16663 ssh_runner.go:195] Run: rm -f paused
	I0222 21:10:32.177843   16663 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0222 21:10:32.201277   16663 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-038000" cluster and "default" namespace by default
	I0222 21:10:28.804567   16538 pod_ready.go:102] pod "calico-kube-controllers-7bdbfc669-rkw7f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:10:30.806729   16538 pod_ready.go:102] pod "calico-kube-controllers-7bdbfc669-rkw7f" in "kube-system" namespace has status "Ready":"False"
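
The api_server.go lines in the trace above poll the apiserver's /healthz endpoint on the host-mapped secure port (52696 in this run) until it returns 200 "ok". A minimal Go sketch of that kind of readiness poll follows; it is an illustration, not minikube's actual code, and the port, deadline, and the choice to skip TLS verification are assumptions for this example (a real client would trust the cluster CA at .minikube/ca.crt, as the kapi.go client config above does).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Hypothetical host-mapped apiserver endpoint taken from this run's log.
	const healthz = "https://127.0.0.1:52696/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// TLS verification skipped for illustration only; /healthz is readable
		// without client credentials via the system:public-info-viewer role.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthz)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}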
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 05:05:09 UTC, end at Thu 2023-02-23 05:10:33 UTC. --
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.068762761Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.068781712Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.068796557Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.068819728Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.068881166Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.069094018Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.069164859Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.069596381Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.081620039Z" level=info msg="Loading containers: start."
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.184384766Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.218442626Z" level=info msg="Loading containers: done."
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.226654079Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.226725807Z" level=info msg="Daemon has completed initialization"
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.248487773Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 05:09:56 kubernetes-upgrade-038000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.252060050Z" level=info msg="API listen on [::]:2376"
	Feb 23 05:09:56 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:09:56.254956860Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 05:10:22 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:22.643315560Z" level=info msg="ignoring event" container=fc15fc425c77a03001b33f79e6df886837531e8de11dfb6b73c2bc7147e5e4b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 05:10:22 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:22.649780384Z" level=info msg="ignoring event" container=d05014e0c03852c587d527be1ce97cf933d0b3c14c8120969e3726b5fb6c5363 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 05:10:22 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:22.690944081Z" level=info msg="ignoring event" container=b429b3682b79efc9869ce840a319fe461faf99e6ac91363375d82adab43086f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 05:10:22 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:22.690972935Z" level=info msg="ignoring event" container=8b2e3c6638cb027546ab14f15a292985052f6b84e1896eb52183d2b7e5c9f4d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 05:10:22 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:22.690994062Z" level=info msg="ignoring event" container=ce8140898c3f6652766cca393665c591eb2457642bc65a4637d8e875281f5f94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 05:10:22 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:22.703080854Z" level=info msg="ignoring event" container=b52475c02717b625fd79bae5487f8933983d0fdf7414d4ed081ca6c8cee9173c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 05:10:22 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:22.778357733Z" level=info msg="ignoring event" container=1641955c419d90aa2dc262ec3f47b1fd1c5335155d34bcf1ee66b193b09ed822 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 05:10:23 kubernetes-upgrade-038000 dockerd[11550]: time="2023-02-23T05:10:23.128910434Z" level=info msg="ignoring event" container=430bec760be0e8a5b9950e56b6aff399b0393c97e1336457678ec707cec82e62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5b7b86d89a88f       fce326961ae2d       8 seconds ago       Running             etcd                      3                   bb1a5e5ebb1cc
	04dd25363bc4e       e9c08e11b07f6       9 seconds ago       Running             kube-controller-manager   3                   c4e2ad576db59
	77d802d0162cb       deb04688c4a35       9 seconds ago       Running             kube-apiserver            2                   7b0fe23d2f63d
	abee24a395bbf       655493523f607       9 seconds ago       Running             kube-scheduler            3                   5c324214e619f
	1641955c419d9       fce326961ae2d       16 seconds ago      Exited              etcd                      2                   8b2e3c6638cb0
	b429b3682b79e       655493523f607       17 seconds ago      Exited              kube-scheduler            2                   d05014e0c0385
	ce8140898c3f6       e9c08e11b07f6       24 seconds ago      Exited              kube-controller-manager   2                   b52475c02717b
	430bec760be0e       deb04688c4a35       31 seconds ago      Exited              kube-apiserver            1                   fc15fc425c77a
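
Because this profile runs the docker container runtime, roughly the same containers shown in the table above, including the Exited control-plane restart attempts, can also be listed straight from the Docker Engine API. The sketch below is illustrative only and assumes the Docker Go SDK at a version where ContainerListOptions still lives in the types package (pre-v25); it is not how minikube produces this table.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect via DOCKER_HOST / the default socket; API-version negotiation
	// keeps this working against the daemon version seen in the log (23.0.1).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// All=true includes exited containers, like the earlier etcd/scheduler attempts.
	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true})
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Printf("%.13s  %-20s  %-8s  %v\n", c.ID, c.Image, c.State, c.Names)
	}
}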
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-038000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-038000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321
	                    minikube.k8s.io/name=kubernetes-upgrade-038000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_22T21_09_45_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 05:09:42 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-038000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 05:10:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 05:10:29 +0000   Thu, 23 Feb 2023 05:09:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 05:10:29 +0000   Thu, 23 Feb 2023 05:09:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 05:10:29 +0000   Thu, 23 Feb 2023 05:09:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 05:10:29 +0000   Thu, 23 Feb 2023 05:09:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-038000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    14aace2c-fe48-40d9-b364-15d456a94896
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-038000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         49s
	  kube-system                 kube-apiserver-kubernetes-upgrade-038000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-038000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-kubernetes-upgrade-038000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  56s (x5 over 56s)  kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x4 over 56s)  kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x4 over 56s)  kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasSufficientPID
	  Normal  Starting                 49s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  49s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  49s                kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s                kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s                kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                46s                kubelet  Node kubernetes-upgrade-038000 status is now: NodeReady
	  Normal  Starting                 10s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet  Node kubernetes-upgrade-038000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet  Updated Node Allocatable limit across pods
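
The node_conditions.go lines in the start trace verify the same data shown in this section: the node's capacity (cpu 6, ephemeral-storage 61202244Ki) and its pressure conditions. A minimal client-go sketch that reads those fields follows; it is only an illustration of the check, and the kubeconfig path is the one written earlier in this run, not a general default.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig as updated by this run; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15909-2664/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s  cpu=%s  ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			// Healthy node: MemoryPressure/DiskPressure/PIDPressure are False, Ready is True.
			fmt.Printf("  %-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}
}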
	
	* 
	* ==> dmesg <==
	* [  +0.000081] FS-Cache: O-key=[8] '9b91130600000000'
	[  +0.000132] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000083] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=00000000defe59bd
	[  +0.000064] FS-Cache: N-key=[8] '9b91130600000000'
	[  +0.003548] FS-Cache: Duplicate cookie detected
	[  +0.000041] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000053] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=00000000431c20f9
	[  +0.000062] FS-Cache: O-key=[8] '9b91130600000000'
	[  +0.000127] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000080] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=0000000013b0fbbe
	[  +0.000045] FS-Cache: N-key=[8] '9b91130600000000'
	[  +3.557940] FS-Cache: Duplicate cookie detected
	[  +0.000036] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000054] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=000000005612b0fe
	[  +0.000059] FS-Cache: O-key=[8] '9a91130600000000'
	[  +0.000042] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000042] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=000000007465c420
	[  +0.000051] FS-Cache: N-key=[8] '9a91130600000000'
	[  +0.500925] FS-Cache: Duplicate cookie detected
	[  +0.000054] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000033] FS-Cache: O-cookie d=00000000d375b396{9p.inode} n=0000000059e8f346
	[  +0.000062] FS-Cache: O-key=[8] 'b991130600000000'
	[  +0.000047] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000043] FS-Cache: N-cookie d=00000000d375b396{9p.inode} n=000000007c126f1c
	[  +0.000043] FS-Cache: N-key=[8] 'b991130600000000'
	
	* 
	* ==> etcd [1641955c419d] <==
	* {"level":"info","ts":"2023-02-23T05:10:17.456Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T05:10:17.456Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-23T05:10:17.456Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-23T05:10:17.456Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T05:10:17.456Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T05:10:18.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-23T05:10:18.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-23T05:10:18.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-02-23T05:10:18.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-02-23T05:10:18.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-23T05:10:18.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-02-23T05:10:18.548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-23T05:10:18.549Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-038000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T05:10:18.549Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T05:10:18.549Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T05:10:18.549Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T05:10:18.549Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T05:10:18.550Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T05:10:18.550Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-02-23T05:10:22.630Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-23T05:10:22.630Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-038000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-02-23T05:10:22.644Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-02-23T05:10:22.731Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-23T05:10:22.732Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-23T05:10:22.733Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-038000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [5b7b86d89a88] <==
	* {"level":"info","ts":"2023-02-23T05:10:26.096Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T05:10:26.096Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T05:10:26.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-02-23T05:10:26.097Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-02-23T05:10:26.097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T05:10:26.097Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T05:10:26.098Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T05:10:26.099Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T05:10:26.099Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T05:10:26.099Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-23T05:10:26.099Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-23T05:10:27.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-23T05:10:27.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-23T05:10:27.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-23T05:10:27.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-02-23T05:10:27.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-02-23T05:10:27.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-02-23T05:10:27.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-02-23T05:10:27.948Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-038000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T05:10:27.948Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T05:10:27.948Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T05:10:27.948Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T05:10:27.948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T05:10:27.949Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-02-23T05:10:27.949Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  05:10:34 up  1:09,  0 users,  load average: 2.79, 1.85, 1.50
	Linux kubernetes-upgrade-038000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [430bec760be0] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 05:10:22.636876       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 05:10:22.636932       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 05:10:22.636964       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [77d802d0162c] <==
	* I0223 05:10:29.146846       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0223 05:10:29.147115       1 controller.go:85] Starting OpenAPI V3 controller
	I0223 05:10:29.147233       1 naming_controller.go:291] Starting NamingConditionController
	I0223 05:10:29.147352       1 establishing_controller.go:76] Starting EstablishingController
	I0223 05:10:29.147444       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0223 05:10:29.147122       1 controller.go:85] Starting OpenAPI controller
	I0223 05:10:29.147505       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0223 05:10:29.147514       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0223 05:10:29.228063       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 05:10:29.228106       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 05:10:29.234314       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 05:10:29.237767       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 05:10:29.246439       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 05:10:29.246488       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 05:10:29.246475       1 cache.go:39] Caches are synced for autoregister controller
	I0223 05:10:29.246973       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 05:10:29.275322       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 05:10:29.305638       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0223 05:10:29.885497       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 05:10:30.149395       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 05:10:30.885121       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 05:10:30.895739       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 05:10:30.936716       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 05:10:30.960742       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 05:10:30.965784       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [04dd25363bc4] <==
	* I0223 05:10:31.266031       1 controllermanager.go:622] Started "clusterrole-aggregation"
	I0223 05:10:31.266147       1 clusterroleaggregation_controller.go:188] Starting ClusterRoleAggregator
	I0223 05:10:31.266154       1 shared_informer.go:273] Waiting for caches to sync for ClusterRoleAggregator
	I0223 05:10:31.273039       1 controllermanager.go:622] Started "pv-protection"
	I0223 05:10:31.273133       1 pv_protection_controller.go:75] Starting PV protection controller
	I0223 05:10:31.273142       1 shared_informer.go:273] Waiting for caches to sync for PV protection
	I0223 05:10:31.276508       1 controllermanager.go:622] Started "disruption"
	I0223 05:10:31.276561       1 disruption.go:424] Sending events to api server.
	I0223 05:10:31.276694       1 disruption.go:435] Starting disruption controller
	I0223 05:10:31.276701       1 shared_informer.go:273] Waiting for caches to sync for disruption
	E0223 05:10:31.278633       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	W0223 05:10:31.278677       1 controllermanager.go:600] Skipping "service"
	I0223 05:10:31.292176       1 controllermanager.go:622] Started "endpoint"
	I0223 05:10:31.292483       1 endpoints_controller.go:178] Starting endpoint controller
	I0223 05:10:31.293015       1 shared_informer.go:273] Waiting for caches to sync for endpoint
	I0223 05:10:31.296591       1 controllermanager.go:622] Started "serviceaccount"
	I0223 05:10:31.296733       1 serviceaccounts_controller.go:111] Starting service account controller
	I0223 05:10:31.296755       1 shared_informer.go:273] Waiting for caches to sync for service account
	I0223 05:10:31.404286       1 controllermanager.go:622] Started "garbagecollector"
	I0223 05:10:31.404558       1 garbagecollector.go:154] Starting garbage collector controller
	I0223 05:10:31.404612       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0223 05:10:31.404654       1 graph_builder.go:291] GraphBuilder running
	I0223 05:10:31.458416       1 controllermanager.go:622] Started "job"
	I0223 05:10:31.458512       1 job_controller.go:191] Starting job controller
	I0223 05:10:31.458518       1 shared_informer.go:273] Waiting for caches to sync for job
	
	* 
	* ==> kube-controller-manager [ce8140898c3f] <==
	* I0223 05:10:22.352874       1 tokencleaner.go:111] Starting token cleaner controller
	I0223 05:10:22.352880       1 shared_informer.go:273] Waiting for caches to sync for token_cleaner
	I0223 05:10:22.352887       1 shared_informer.go:280] Caches are synced for token_cleaner
	I0223 05:10:22.355159       1 controllermanager.go:622] Started "serviceaccount"
	I0223 05:10:22.355240       1 serviceaccounts_controller.go:111] Starting service account controller
	I0223 05:10:22.355249       1 shared_informer.go:273] Waiting for caches to sync for service account
	I0223 05:10:22.356947       1 controllermanager.go:622] Started "ttl"
	I0223 05:10:22.357135       1 ttl_controller.go:120] Starting TTL controller
	I0223 05:10:22.357168       1 shared_informer.go:273] Waiting for caches to sync for TTL
	I0223 05:10:22.359231       1 controllermanager.go:622] Started "persistentvolume-expander"
	I0223 05:10:22.359245       1 expand_controller.go:340] Starting expand controller
	I0223 05:10:22.359429       1 shared_informer.go:273] Waiting for caches to sync for expand
	I0223 05:10:22.361145       1 controllermanager.go:622] Started "ttl-after-finished"
	I0223 05:10:22.361264       1 ttlafterfinished_controller.go:104] Starting TTL after finished controller
	I0223 05:10:22.361275       1 shared_informer.go:273] Waiting for caches to sync for TTL after finished
	I0223 05:10:22.362559       1 controllermanager.go:622] Started "cronjob"
	I0223 05:10:22.362675       1 cronjob_controllerv2.go:137] "Starting cronjob controller v2"
	I0223 05:10:22.362710       1 shared_informer.go:273] Waiting for caches to sync for cronjob
	I0223 05:10:22.379811       1 controllermanager.go:622] Started "namespace"
	I0223 05:10:22.379872       1 namespace_controller.go:195] Starting namespace controller
	I0223 05:10:22.379880       1 shared_informer.go:273] Waiting for caches to sync for namespace
	I0223 05:10:22.384852       1 controllermanager.go:622] Started "statefulset"
	I0223 05:10:22.385019       1 stateful_set.go:152] Starting stateful set controller
	I0223 05:10:22.385053       1 shared_informer.go:273] Waiting for caches to sync for stateful set
	I0223 05:10:22.404713       1 shared_informer.go:280] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [abee24a395bb] <==
	* W0223 05:10:29.223651       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 05:10:29.223715       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 05:10:29.223835       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 05:10:29.223994       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 05:10:29.224154       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 05:10:29.224217       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 05:10:29.224352       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 05:10:29.224420       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 05:10:29.224562       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0223 05:10:29.224680       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0223 05:10:29.225136       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0223 05:10:29.225311       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0223 05:10:29.225496       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0223 05:10:29.225664       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 05:10:29.225814       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 05:10:29.225939       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 05:10:29.226094       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 05:10:29.226167       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 05:10:29.226308       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 05:10:29.226420       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 05:10:29.226720       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 05:10:29.226847       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 05:10:29.227087       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 05:10:29.227435       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0223 05:10:30.215333       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [b429b3682b79] <==
	* I0223 05:10:16.693763       1 serving.go:348] Generated self-signed cert in-memory
	W0223 05:10:21.097159       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0223 05:10:21.099087       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0223 05:10:21.099120       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0223 05:10:21.099126       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0223 05:10:21.122023       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0223 05:10:21.122333       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 05:10:21.123955       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0223 05:10:21.124598       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0223 05:10:21.124807       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 05:10:21.124930       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 05:10:21.225439       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 05:10:22.617679       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0223 05:10:22.617803       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0223 05:10:22.618067       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 05:05:09 UTC, end at Thu 2023-02-23 05:10:36 UTC. --
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695262   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0512c813a576e2b16b3669afd9b2ee83-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-038000\" (UID: \"0512c813a576e2b16b3669afd9b2ee83\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695358   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f14ffc17c50f2b05d33885895a93d995-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-038000\" (UID: \"f14ffc17c50f2b05d33885895a93d995\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695418   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0512c813a576e2b16b3669afd9b2ee83-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-038000\" (UID: \"0512c813a576e2b16b3669afd9b2ee83\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695591   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0512c813a576e2b16b3669afd9b2ee83-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-038000\" (UID: \"0512c813a576e2b16b3669afd9b2ee83\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695622   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0512c813a576e2b16b3669afd9b2ee83-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-038000\" (UID: \"0512c813a576e2b16b3669afd9b2ee83\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695645   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f14ffc17c50f2b05d33885895a93d995-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-038000\" (UID: \"f14ffc17c50f2b05d33885895a93d995\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695662   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8739bb50f1a088bf32060d092a93a68a-etcd-data\") pod \"etcd-kubernetes-upgrade-038000\" (UID: \"8739bb50f1a088bf32060d092a93a68a\") " pod="kube-system/etcd-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695720   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0512c813a576e2b16b3669afd9b2ee83-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-038000\" (UID: \"0512c813a576e2b16b3669afd9b2ee83\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695763   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f14ffc17c50f2b05d33885895a93d995-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-038000\" (UID: \"f14ffc17c50f2b05d33885895a93d995\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695784   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f14ffc17c50f2b05d33885895a93d995-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-038000\" (UID: \"f14ffc17c50f2b05d33885895a93d995\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.695801   13141 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f14ffc17c50f2b05d33885895a93d995-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-038000\" (UID: \"f14ffc17c50f2b05d33885895a93d995\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-038000"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.831743   13141 scope.go:115] "RemoveContainer" containerID="b429b3682b79efc9869ce840a319fe461faf99e6ac91363375d82adab43086f4"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.848646   13141 scope.go:115] "RemoveContainer" containerID="430bec760be0e8a5b9950e56b6aff399b0393c97e1336457678ec707cec82e62"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:24.856776   13141 scope.go:115] "RemoveContainer" containerID="ce8140898c3f6652766cca393665c591eb2457642bc65a4637d8e875281f5f94"
	Feb 23 05:10:24 kubernetes-upgrade-038000 kubelet[13141]: E0223 05:10:24.894902   13141 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-038000?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 23 05:10:25 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:25.029839   13141 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-038000"
	Feb 23 05:10:25 kubernetes-upgrade-038000 kubelet[13141]: E0223 05:10:25.030181   13141 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-038000"
	Feb 23 05:10:25 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:25.518865   13141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b2e3c6638cb027546ab14f15a292985052f6b84e1896eb52183d2b7e5c9f4d5"
	Feb 23 05:10:25 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:25.602735   13141 scope.go:115] "RemoveContainer" containerID="1641955c419d90aa2dc262ec3f47b1fd1c5335155d34bcf1ee66b193b09ed822"
	Feb 23 05:10:25 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:25.846019   13141 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-038000"
	Feb 23 05:10:29 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:29.269783   13141 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-038000"
	Feb 23 05:10:29 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:29.269885   13141 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-038000"
	Feb 23 05:10:29 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:29.292183   13141 apiserver.go:52] "Watching apiserver"
	Feb 23 05:10:29 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:29.393910   13141 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 23 05:10:29 kubernetes-upgrade-038000 kubelet[13141]: I0223 05:10:29.429836   13141 reconciler.go:41] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-038000 -n kubernetes-upgrade-038000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-038000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-038000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-038000 describe pod storage-provisioner: exit status 1 (84.343255ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-038000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-038000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-038000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-038000: (3.002646021s)
--- FAIL: TestKubernetesUpgrade (584.59s)

                                                
                                    
TestMissingContainerUpgrade (60.46s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1234656815.exe start -p missing-upgrade-422000 --memory=2200 --driver=docker 
E0222 21:00:03.126274    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1234656815.exe start -p missing-upgrade-422000 --memory=2200 --driver=docker : exit status 78 (44.647903785s)

                                                
                                                
-- stdout --
	* [missing-upgrade-422000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-422000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-422000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 149.51 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 1.44 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 11.36 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 24.67 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.20 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 51.59 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 64.70 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 78.14 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 91.69 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 105.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 118.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 131.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 144.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 157.37 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 167.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 180.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 194.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 207.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 221.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 234.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 247.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 261.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 274.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 287.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 300.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 314.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 327.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 340.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 353.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 364.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 378.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 391.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 404.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 418.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 431.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 444.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 458.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 471.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 480.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 492.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 505.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 518.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 532.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 05:00:19.981292090 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-422000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 05:00:39.237996914 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1234656815.exe start -p missing-upgrade-422000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1234656815.exe start -p missing-upgrade-422000 --memory=2200 --driver=docker : exit status 70 (4.188991758s)

                                                
                                                
-- stdout --
	* [missing-upgrade-422000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-422000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-422000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1234656815.exe start -p missing-upgrade-422000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1234656815.exe start -p missing-upgrade-422000 --memory=2200 --driver=docker : exit status 70 (3.97910426s)

                                                
                                                
-- stdout --
	* [missing-upgrade-422000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-422000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-422000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-02-22 21:00:52.868308 -0800 PST m=+2343.158867755
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-422000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-422000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "818cac022489b6de7d0365240e7f00b2842436525a536c75dfd9fadbbcdda796",
	        "Created": "2023-02-23T05:00:28.136678531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 172930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:00:28.353222013Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/818cac022489b6de7d0365240e7f00b2842436525a536c75dfd9fadbbcdda796/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/818cac022489b6de7d0365240e7f00b2842436525a536c75dfd9fadbbcdda796/hostname",
	        "HostsPath": "/var/lib/docker/containers/818cac022489b6de7d0365240e7f00b2842436525a536c75dfd9fadbbcdda796/hosts",
	        "LogPath": "/var/lib/docker/containers/818cac022489b6de7d0365240e7f00b2842436525a536c75dfd9fadbbcdda796/818cac022489b6de7d0365240e7f00b2842436525a536c75dfd9fadbbcdda796-json.log",
	        "Name": "/missing-upgrade-422000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-422000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/56dbbaec093ab25a436c5e59e562ac034fbe672076a7681c22b2f967754940df-init/diff:/var/lib/docker/overlay2/3809bca7bbca31396676a567d8cbe8022543aa2fc7f8e1f35de623c1eb8f082c/diff:/var/lib/docker/overlay2/7001dfd98a66ae7d206f8987ed718dcb859bbeffba7889774896583e23a70be1/diff:/var/lib/docker/overlay2/3e2cba4e745744bab9fc827a2d7a5199fac7789d76a4facb78222078e4a585a0/diff:/var/lib/docker/overlay2/f09668468bd4667efac9aeaa9d511cbe2c0debe927d14f4ca4d2aa8ff6b7fce5/diff:/var/lib/docker/overlay2/485e4fe1c68a1f59490773170f989f8d0d2cba63452a4212d0684a11047bb198/diff:/var/lib/docker/overlay2/a0baaf5e1ef2c08611311992793a0826620f8353760ad43a4c67ebc2b59d6fe3/diff:/var/lib/docker/overlay2/8385b8aa04f58288a2be68f7088a8fdc84de87fa69443d398684880ff81e3539/diff:/var/lib/docker/overlay2/232086d746b0b4f53939037276e587a36adc633928f67cef6069ad9ef7edf129/diff:/var/lib/docker/overlay2/d10ec2445d5bb316752ece7f1716309fd649d76ee7c83f76896fab522f478ac0/diff:/var/lib/docker/overlay2/b847fb
4f6755a5b58ce60944e330646ac169caaa5cdc4c5a8019b76e24591b0c/diff:/var/lib/docker/overlay2/193a2c6d5ad0db4bfcb6f97ed5d474004348e4cbf66e61af7c3830e9839eda3c/diff:/var/lib/docker/overlay2/881021416a6946d1219c033d4b36022bd9533de329077c4e88d6e2dc466a3436/diff:/var/lib/docker/overlay2/edd49e29d6a52b87c75d59949320122c4bbcfa8eacc889eb325e5eaea003438e/diff:/var/lib/docker/overlay2/e8a183e5f2e1e64fa7f5b289b2e9db45676df1f7bd22effd06c5b7c6cacd3830/diff:/var/lib/docker/overlay2/5f76c205b1257281d0378e1d3004cc1dad398403b5cb45cb3e7d7ca89ffa6479/diff:/var/lib/docker/overlay2/30b9f978bf14c9c9ee8054b0344b28407ceea4febe6689782544b465847bc927/diff:/var/lib/docker/overlay2/7e737a2172758df4045b0e9accf71b33f6a919c4cc3c489d3852df9ca26863fe/diff:/var/lib/docker/overlay2/962dad0c4c8f3b1848af61a35084296d991fa7018ca46d3913d4f6dc2f0eeb4d/diff:/var/lib/docker/overlay2/cfc9515ab9b140dd3b8195b2930c8cff1cddcb712151b7510ca528e9952f4d93/diff:/var/lib/docker/overlay2/5e8d14faff3855891be36b221d1cffdd00638db060ff50e8b928760b348f40f5/diff:/var/lib/d
ocker/overlay2/395eb4b2380c1656ffafea4d8ec3deca3a5ab69ec638f821bb7a9c20aeb2eee0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56dbbaec093ab25a436c5e59e562ac034fbe672076a7681c22b2f967754940df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56dbbaec093ab25a436c5e59e562ac034fbe672076a7681c22b2f967754940df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56dbbaec093ab25a436c5e59e562ac034fbe672076a7681c22b2f967754940df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-422000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-422000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-422000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-422000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-422000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fc4e9993ed39db487304bf5966d1e6e26bc952f18674284426216999061877c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8fc4e9993ed3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "2c868fb3234665e77e4f7fa532d069c075dc29480ce0bab670bb53d2cf58f5b7",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "243d1519a56171faf9c3f743a09d8363a9221131ce4b0ebb491009903f325875",
	                    "EndpointID": "2c868fb3234665e77e4f7fa532d069c075dc29480ce0bab670bb53d2cf58f5b7",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-422000 -n missing-upgrade-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-422000 -n missing-upgrade-422000: exit status 6 (375.919753ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 21:00:53.291340   13713 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-422000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-422000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-422000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-422000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-422000: (2.338401566s)
--- FAIL: TestMissingContainerUpgrade (60.46s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (56.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3453331419.exe start -p stopped-upgrade-634000 --memory=2200 --vm-driver=docker 
E0222 21:02:46.103270    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3453331419.exe start -p stopped-upgrade-634000 --memory=2200 --vm-driver=docker : exit status 70 (45.459475467s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-634000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1793375906
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 05:02:46.493445358 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-634000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 05:03:06.162248151 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-634000", then "minikube start -p stopped-upgrade-634000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 160.13 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 1.67 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 12.62 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 26.19 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.94 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 52.36 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 66.42 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 77.92 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 88.06 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 101.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 115.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 128.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 142.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 156.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 170.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 183.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 197.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 211.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 225.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 238.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 252.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 266.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 280.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 293.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 307.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 321.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 328.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 340.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 354.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 366.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 379.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 393.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 405.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 416.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 429.51 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 443.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 457.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 470.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 484.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 498.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 512.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 525.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 539.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 05:03:06.162248151 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3453331419.exe start -p stopped-upgrade-634000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3453331419.exe start -p stopped-upgrade-634000 --memory=2200 --vm-driver=docker : exit status 70 (4.386911199s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-634000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig555165190
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-634000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3453331419.exe start -p stopped-upgrade-634000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3453331419.exe start -p stopped-upgrade-634000 --memory=2200 --vm-driver=docker : exit status 70 (4.373139408s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-634000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2895828043
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-634000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (56.69s)
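Every attempt fails at the same step: sudo systemctl start docker exits non-zero right after minikube replaces /lib/systemd/system/docker.service. As the comments embedded in that unit put it (the full text appears in the old-k8s-version-865000 log below), the bare ExecStart= line clears the command inherited from the base configuration so that systemd does not see more than one ExecStart= setting. The same reset pattern applies to ordinary drop-in overrides; a minimal sketch with a hypothetical override path, not taken from this report:

	# /etc/systemd/system/docker.service.d/override.conf (hypothetical path)
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock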

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (251.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-865000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-865000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m11.254749485s)

                                                
                                                
-- stdout --
	* [old-k8s-version-865000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-865000 in cluster old-k8s-version-865000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 21:15:22.809218   20193 out.go:296] Setting OutFile to fd 1 ...
	I0222 21:15:22.809484   20193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:15:22.809491   20193 out.go:309] Setting ErrFile to fd 2...
	I0222 21:15:22.809496   20193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:15:22.809617   20193 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 21:15:22.811297   20193 out.go:303] Setting JSON to false
	I0222 21:15:22.832109   20193 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4497,"bootTime":1677124825,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 21:15:22.832190   20193 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 21:15:22.869858   20193 out.go:177] * [old-k8s-version-865000] minikube v1.29.0 on Darwin 13.2
	I0222 21:15:22.982110   20193 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 21:15:22.944862   20193 notify.go:220] Checking for updates...
	I0222 21:15:23.040596   20193 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:15:23.115031   20193 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 21:15:23.190002   20193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 21:15:23.264559   20193 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 21:15:23.323905   20193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 21:15:23.360666   20193 config.go:182] Loaded profile config "kubenet-310000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:15:23.360753   20193 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 21:15:23.427753   20193 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 21:15:23.427920   20193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:15:23.584947   20193 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:15:23.483225007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:15:23.624788   20193 out.go:177] * Using the docker driver based on user configuration
	I0222 21:15:23.682851   20193 start.go:296] selected driver: docker
	I0222 21:15:23.682883   20193 start.go:857] validating driver "docker" against <nil>
	I0222 21:15:23.682909   20193 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 21:15:23.686820   20193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:15:23.832754   20193 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:15:23.740307625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:15:23.832886   20193 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0222 21:15:23.833097   20193 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0222 21:15:23.854797   20193 out.go:177] * Using Docker Desktop driver with root privileges
	I0222 21:15:23.891701   20193 cni.go:84] Creating CNI manager for ""
	I0222 21:15:23.891740   20193 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 21:15:23.891754   20193 start_flags.go:319] config:
	{Name:old-k8s-version-865000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:15:23.913752   20193 out.go:177] * Starting control plane node old-k8s-version-865000 in cluster old-k8s-version-865000
	I0222 21:15:23.950855   20193 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 21:15:23.987864   20193 out.go:177] * Pulling base image ...
	I0222 21:15:24.024979   20193 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:15:24.025078   20193 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 21:15:24.025107   20193 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0222 21:15:24.025127   20193 cache.go:57] Caching tarball of preloaded images
	I0222 21:15:24.025386   20193 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 21:15:24.025423   20193 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0222 21:15:24.025647   20193 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/config.json ...
	I0222 21:15:24.026337   20193 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/config.json: {Name:mk92a5550f58eb5b42a60e5f19f57541a4fe93a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:24.081939   20193 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 21:15:24.081958   20193 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 21:15:24.082141   20193 cache.go:193] Successfully downloaded all kic artifacts
	I0222 21:15:24.082185   20193 start.go:364] acquiring machines lock for old-k8s-version-865000: {Name:mk6cb26d76424f531c6197cd66b272c20668b8f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 21:15:24.082343   20193 start.go:368] acquired machines lock for "old-k8s-version-865000" in 145.577µs
	I0222 21:15:24.082376   20193 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-865000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 21:15:24.082455   20193 start.go:125] createHost starting for "" (driver="docker")
	I0222 21:15:24.125737   20193 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0222 21:15:24.126184   20193 start.go:159] libmachine.API.Create for "old-k8s-version-865000" (driver="docker")
	I0222 21:15:24.126228   20193 client.go:168] LocalClient.Create starting
	I0222 21:15:24.126410   20193 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
	I0222 21:15:24.126496   20193 main.go:141] libmachine: Decoding PEM data...
	I0222 21:15:24.126532   20193 main.go:141] libmachine: Parsing certificate...
	I0222 21:15:24.126652   20193 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
	I0222 21:15:24.126717   20193 main.go:141] libmachine: Decoding PEM data...
	I0222 21:15:24.126734   20193 main.go:141] libmachine: Parsing certificate...
	I0222 21:15:24.127654   20193 cli_runner.go:164] Run: docker network inspect old-k8s-version-865000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0222 21:15:24.181562   20193 cli_runner.go:211] docker network inspect old-k8s-version-865000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0222 21:15:24.181686   20193 network_create.go:281] running [docker network inspect old-k8s-version-865000] to gather additional debugging logs...
	I0222 21:15:24.181706   20193 cli_runner.go:164] Run: docker network inspect old-k8s-version-865000
	W0222 21:15:24.235717   20193 cli_runner.go:211] docker network inspect old-k8s-version-865000 returned with exit code 1
	I0222 21:15:24.235742   20193 network_create.go:284] error running [docker network inspect old-k8s-version-865000]: docker network inspect old-k8s-version-865000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-865000
	I0222 21:15:24.235756   20193 network_create.go:286] output of [docker network inspect old-k8s-version-865000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-865000
	
	** /stderr **
	I0222 21:15:24.235839   20193 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0222 21:15:24.291693   20193 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 21:15:24.293073   20193 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 21:15:24.294603   20193 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0222 21:15:24.294922   20193 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00052ce60}
	I0222 21:15:24.294935   20193 network_create.go:123] attempt to create docker network old-k8s-version-865000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0222 21:15:24.295008   20193 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-865000 old-k8s-version-865000
	I0222 21:15:24.382717   20193 network_create.go:107] docker network old-k8s-version-865000 192.168.76.0/24 created
	I0222 21:15:24.382747   20193 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-865000" container
	I0222 21:15:24.382859   20193 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0222 21:15:24.448493   20193 cli_runner.go:164] Run: docker volume create old-k8s-version-865000 --label name.minikube.sigs.k8s.io=old-k8s-version-865000 --label created_by.minikube.sigs.k8s.io=true
	I0222 21:15:24.509062   20193 oci.go:103] Successfully created a docker volume old-k8s-version-865000
	I0222 21:15:24.509215   20193 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-865000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-865000 --entrypoint /usr/bin/test -v old-k8s-version-865000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0222 21:15:24.969253   20193 oci.go:107] Successfully prepared a docker volume old-k8s-version-865000
	I0222 21:15:24.969293   20193 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:15:24.969309   20193 kic.go:190] Starting extracting preloaded images to volume ...
	I0222 21:15:24.969434   20193 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-865000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0222 21:15:31.217633   20193 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-865000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.248205483s)
	I0222 21:15:31.217655   20193 kic.go:199] duration metric: took 6.248428 seconds to extract preloaded images to volume
	I0222 21:15:31.217764   20193 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0222 21:15:31.360184   20193 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-865000 --name old-k8s-version-865000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-865000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-865000 --network old-k8s-version-865000 --ip 192.168.76.2 --volume old-k8s-version-865000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0222 21:15:31.728066   20193 cli_runner.go:164] Run: docker container inspect old-k8s-version-865000 --format={{.State.Running}}
	I0222 21:15:31.791998   20193 cli_runner.go:164] Run: docker container inspect old-k8s-version-865000 --format={{.State.Status}}
	I0222 21:15:31.860282   20193 cli_runner.go:164] Run: docker exec old-k8s-version-865000 stat /var/lib/dpkg/alternatives/iptables
	I0222 21:15:31.998629   20193 oci.go:144] the created container "old-k8s-version-865000" has a running status.
	I0222 21:15:31.998672   20193 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa...
	I0222 21:15:32.259892   20193 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0222 21:15:32.363469   20193 cli_runner.go:164] Run: docker container inspect old-k8s-version-865000 --format={{.State.Status}}
	I0222 21:15:32.424392   20193 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0222 21:15:32.424411   20193 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-865000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0222 21:15:32.532182   20193 cli_runner.go:164] Run: docker container inspect old-k8s-version-865000 --format={{.State.Status}}
	I0222 21:15:32.589214   20193 machine.go:88] provisioning docker machine ...
	I0222 21:15:32.589256   20193 ubuntu.go:169] provisioning hostname "old-k8s-version-865000"
	I0222 21:15:32.589363   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:32.649128   20193 main.go:141] libmachine: Using SSH client type: native
	I0222 21:15:32.649517   20193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54450 <nil> <nil>}
	I0222 21:15:32.649530   20193 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-865000 && echo "old-k8s-version-865000" | sudo tee /etc/hostname
	I0222 21:15:32.791344   20193 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-865000
	
	I0222 21:15:32.791435   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:32.849718   20193 main.go:141] libmachine: Using SSH client type: native
	I0222 21:15:32.850089   20193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54450 <nil> <nil>}
	I0222 21:15:32.850102   20193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-865000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-865000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-865000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 21:15:32.984121   20193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:15:32.984144   20193 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 21:15:32.984161   20193 ubuntu.go:177] setting up certificates
	I0222 21:15:32.984174   20193 provision.go:83] configureAuth start
	I0222 21:15:32.984273   20193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-865000
	I0222 21:15:33.042444   20193 provision.go:138] copyHostCerts
	I0222 21:15:33.042548   20193 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 21:15:33.042563   20193 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 21:15:33.042699   20193 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 21:15:33.042916   20193 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 21:15:33.042922   20193 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 21:15:33.042990   20193 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 21:15:33.043135   20193 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 21:15:33.043144   20193 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 21:15:33.043210   20193 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 21:15:33.043335   20193 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-865000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-865000]
	I0222 21:15:33.094699   20193 provision.go:172] copyRemoteCerts
	I0222 21:15:33.094753   20193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 21:15:33.094811   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:33.154772   20193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54450 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:15:33.250873   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 21:15:33.268387   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0222 21:15:33.285707   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 21:15:33.302755   20193 provision.go:86] duration metric: configureAuth took 318.569888ms
	I0222 21:15:33.302770   20193 ubuntu.go:193] setting minikube options for container-runtime
	I0222 21:15:33.302920   20193 config.go:182] Loaded profile config "old-k8s-version-865000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0222 21:15:33.302987   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:33.360259   20193 main.go:141] libmachine: Using SSH client type: native
	I0222 21:15:33.360608   20193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54450 <nil> <nil>}
	I0222 21:15:33.360622   20193 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 21:15:33.493738   20193 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 21:15:33.493760   20193 ubuntu.go:71] root file system type: overlay
	I0222 21:15:33.493872   20193 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 21:15:33.493961   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:33.554003   20193 main.go:141] libmachine: Using SSH client type: native
	I0222 21:15:33.554343   20193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54450 <nil> <nil>}
	I0222 21:15:33.554390   20193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 21:15:33.699682   20193 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 21:15:33.699771   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:33.758046   20193 main.go:141] libmachine: Using SSH client type: native
	I0222 21:15:33.758436   20193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54450 <nil> <nil>}
	I0222 21:15:33.758449   20193 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 21:15:34.400745   20193 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 05:15:33.697897346 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0222 21:15:34.400772   20193 machine.go:91] provisioned docker machine in 1.811563633s
	I0222 21:15:34.400780   20193 client.go:171] LocalClient.Create took 10.274679647s
	I0222 21:15:34.400800   20193 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-865000" took 10.274752327s
	I0222 21:15:34.400812   20193 start.go:300] post-start starting for "old-k8s-version-865000" (driver="docker")
	I0222 21:15:34.400823   20193 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 21:15:34.400915   20193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 21:15:34.400969   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:34.464661   20193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54450 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:15:34.561012   20193 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 21:15:34.564621   20193 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 21:15:34.564639   20193 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 21:15:34.564646   20193 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 21:15:34.564652   20193 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 21:15:34.564663   20193 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 21:15:34.564781   20193 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 21:15:34.564968   20193 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 21:15:34.565168   20193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 21:15:34.572460   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:15:34.589836   20193 start.go:303] post-start completed in 189.015892ms
	I0222 21:15:34.590352   20193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-865000
	I0222 21:15:34.650618   20193 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/config.json ...
	I0222 21:15:34.651047   20193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 21:15:34.651105   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:34.708783   20193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54450 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:15:34.800351   20193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 21:15:34.804883   20193 start.go:128] duration metric: createHost completed in 10.722561881s
	I0222 21:15:34.804901   20193 start.go:83] releasing machines lock for "old-k8s-version-865000", held for 10.722690846s
	I0222 21:15:34.804991   20193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-865000
	I0222 21:15:34.862206   20193 ssh_runner.go:195] Run: cat /version.json
	I0222 21:15:34.862211   20193 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0222 21:15:34.862273   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:34.862296   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:34.934310   20193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54450 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:15:34.934344   20193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54450 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:15:35.244619   20193 ssh_runner.go:195] Run: systemctl --version
	I0222 21:15:35.249795   20193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 21:15:35.254706   20193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 21:15:35.274734   20193 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 21:15:35.274805   20193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0222 21:15:35.288626   20193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0222 21:15:35.296495   20193 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0222 21:15:35.296510   20193 start.go:485] detecting cgroup driver to use...
	I0222 21:15:35.296521   20193 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:15:35.296608   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:15:35.310038   20193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0222 21:15:35.318868   20193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 21:15:35.327739   20193 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 21:15:35.327805   20193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 21:15:35.336443   20193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:15:35.344906   20193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 21:15:35.353336   20193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:15:35.362047   20193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 21:15:35.370058   20193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 21:15:35.378815   20193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 21:15:35.386076   20193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 21:15:35.393426   20193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:15:35.464225   20193 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 21:15:35.537200   20193 start.go:485] detecting cgroup driver to use...
	I0222 21:15:35.537220   20193 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:15:35.537295   20193 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 21:15:35.550232   20193 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 21:15:35.550318   20193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 21:15:35.562695   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:15:35.577770   20193 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 21:15:35.676492   20193 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 21:15:35.762008   20193 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 21:15:35.762035   20193 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 21:15:35.776760   20193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:15:35.864235   20193 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 21:15:36.100794   20193 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:15:36.128196   20193 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:15:36.198973   20193 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0222 21:15:36.199181   20193 cli_runner.go:164] Run: docker exec -t old-k8s-version-865000 dig +short host.docker.internal
	I0222 21:15:36.325936   20193 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:15:36.326055   20193 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:15:36.330806   20193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:15:36.342404   20193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:15:36.406212   20193 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:15:36.406293   20193 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:15:36.429529   20193 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:15:36.429548   20193 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:15:36.429649   20193 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:15:36.455191   20193 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:15:36.455207   20193 cache_images.go:84] Images are preloaded, skipping loading
	I0222 21:15:36.455309   20193 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:15:36.483834   20193 cni.go:84] Creating CNI manager for ""
	I0222 21:15:36.483858   20193 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 21:15:36.483880   20193 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 21:15:36.483905   20193 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-865000 NodeName:old-k8s-version-865000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:15:36.484031   20193 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-865000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-865000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 21:15:36.484134   20193 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-865000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
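Note: the KubeletConfiguration above pins cgroupDriver: cgroupfs, and earlier in this log minikube also configured Docker for the "cgroupfs" driver; a mismatch between the two is a common cause of the kubelet health-check failures that follow. A minimal sketch for checking both on the node (assumes shell access to the old-k8s-version-865000 container, e.g. `minikube ssh -p old-k8s-version-865000`; not part of this test run):
	docker info --format '{{.CgroupDriver}}'              # driver Docker is actually using
	sudo grep cgroupDriver /var/lib/kubelet/config.yaml   # driver the kubelet was told to use
	systemctl cat kubelet                                  # effective unit plus drop-ins such as 10-kubeadm.conf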
	I0222 21:15:36.484215   20193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0222 21:15:36.492769   20193 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:15:36.492836   20193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:15:36.500718   20193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0222 21:15:36.513692   20193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:15:36.527206   20193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0222 21:15:36.540520   20193 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:15:36.544624   20193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:15:36.555078   20193 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000 for IP: 192.168.76.2
	I0222 21:15:36.555096   20193 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:36.555285   20193 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:15:36.555368   20193 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:15:36.555414   20193 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/client.key
	I0222 21:15:36.555427   20193 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/client.crt with IP's: []
	I0222 21:15:36.766134   20193 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/client.crt ...
	I0222 21:15:36.766152   20193 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/client.crt: {Name:mk66d23f21b13aaf2b17a214c7c2b148dd21f5bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:36.766448   20193 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/client.key ...
	I0222 21:15:36.766455   20193 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/client.key: {Name:mk0d47468cb8091d4284b6a5e277185a5cd67f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:36.766664   20193 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key.31bdca25
	I0222 21:15:36.766679   20193 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0222 21:15:36.870267   20193 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.crt.31bdca25 ...
	I0222 21:15:36.870283   20193 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.crt.31bdca25: {Name:mk2706b5d0d00acf72ca6bc56e0d049cd81bf06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:36.870578   20193 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key.31bdca25 ...
	I0222 21:15:36.870586   20193 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key.31bdca25: {Name:mkcf5c95768f5b3695211128e6846a0c0c82f3ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:36.870765   20193 certs.go:333] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.crt
	I0222 21:15:36.870934   20193 certs.go:337] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key
	I0222 21:15:36.871108   20193 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.key
	I0222 21:15:36.871122   20193 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.crt with IP's: []
	I0222 21:15:36.994966   20193 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.crt ...
	I0222 21:15:36.994989   20193 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.crt: {Name:mkc6e16149a6bc147edd8ef6233ab7b33f706c25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:36.995280   20193 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.key ...
	I0222 21:15:36.995291   20193 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.key: {Name:mkc6f260b2ad104efb3c36d9964176f6124e8cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:15:36.995710   20193 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:15:36.995762   20193 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:15:36.995774   20193 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:15:36.995831   20193 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:15:36.995865   20193 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:15:36.995896   20193 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:15:36.995968   20193 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:15:36.996480   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:15:37.015172   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0222 21:15:37.032766   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:15:37.051037   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0222 21:15:37.068532   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:15:37.086953   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:15:37.104439   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:15:37.121800   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:15:37.139810   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:15:37.157607   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:15:37.175202   20193 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:15:37.197376   20193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:15:37.217841   20193 ssh_runner.go:195] Run: openssl version
	I0222 21:15:37.223860   20193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:15:37.232336   20193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:15:37.237636   20193 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:15:37.237707   20193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:15:37.243745   20193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:15:37.253221   20193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:15:37.263422   20193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:15:37.267618   20193 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:15:37.267691   20193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:15:37.273235   20193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 21:15:37.282250   20193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:15:37.294572   20193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:15:37.301189   20193 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:15:37.301281   20193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:15:37.309019   20193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 21:15:37.318099   20193 kubeadm.go:401] StartCluster: {Name:old-k8s-version-865000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:15:37.318212   20193 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:15:37.337550   20193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:15:37.346051   20193 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:15:37.353985   20193 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:15:37.354038   20193 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:15:37.361743   20193 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:15:37.361770   20193 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:15:37.414724   20193 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:15:37.414788   20193 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:15:37.589937   20193 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:15:37.590082   20193 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:15:37.590157   20193 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:15:37.760083   20193 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:15:37.760186   20193 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:15:37.768405   20193 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:15:37.838795   20193 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:15:37.858499   20193 out.go:204]   - Generating certificates and keys ...
	I0222 21:15:37.858636   20193 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:15:37.858749   20193 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:15:38.353342   20193 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0222 21:15:38.450899   20193 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0222 21:15:38.645331   20193 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0222 21:15:38.989232   20193 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0222 21:15:39.069801   20193 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0222 21:15:39.070237   20193 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-865000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0222 21:15:39.211743   20193 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0222 21:15:39.238275   20193 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-865000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0222 21:15:39.400846   20193 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0222 21:15:39.603025   20193 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0222 21:15:39.725878   20193 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0222 21:15:39.725945   20193 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:15:39.903063   20193 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:15:39.990179   20193 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:15:40.113714   20193 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:15:40.226142   20193 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:15:40.226737   20193 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:15:40.251245   20193 out.go:204]   - Booting up control plane ...
	I0222 21:15:40.251470   20193 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:15:40.251616   20193 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:15:40.251714   20193 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:15:40.251809   20193 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:15:40.251997   20193 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:16:20.235047   20193 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:16:20.235592   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:16:20.235792   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:16:25.236468   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:16:25.236610   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:16:35.237827   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:16:35.238104   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:16:55.239387   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:16:55.239629   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:17:35.240663   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:17:35.240878   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:17:35.240893   20193 kubeadm.go:322] 
	I0222 21:17:35.240931   20193 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:17:35.240972   20193 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:17:35.240981   20193 kubeadm.go:322] 
	I0222 21:17:35.241036   20193 kubeadm.go:322] This error is likely caused by:
	I0222 21:17:35.241093   20193 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:17:35.241242   20193 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:17:35.241256   20193 kubeadm.go:322] 
	I0222 21:17:35.241388   20193 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:17:35.241424   20193 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:17:35.241456   20193 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:17:35.241464   20193 kubeadm.go:322] 
	I0222 21:17:35.241595   20193 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:17:35.241695   20193 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:17:35.241802   20193 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:17:35.241855   20193 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:17:35.241946   20193 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:17:35.241992   20193 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:17:35.244707   20193 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:17:35.244787   20193 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:17:35.244884   20193 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:17:35.244975   20193 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:17:35.245040   20193 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:17:35.245107   20193 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0222 21:17:35.245293   20193 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-865000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-865000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-865000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-865000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
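Note: the diagnostics kubeadm recommends in the message above could be gathered from the node roughly as follows (a sketch built only from the commands named in that message; assumes shell access to the old-k8s-version-865000 container, e.g. `minikube ssh -p old-k8s-version-865000`, and CONTAINERID is a placeholder for an ID found by the grep):
	systemctl status kubelet                      # is the kubelet service running at all?
	sudo journalctl -xeu kubelet                  # recent kubelet logs and the reason it is failing
	docker ps -a | grep kube | grep -v pause      # control-plane containers the runtime has created
	docker logs CONTAINERID                       # inspect a failing container found above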
	
	I0222 21:17:35.245324   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0222 21:17:35.656575   20193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:17:35.666448   20193 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:17:35.666511   20193 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:17:35.674161   20193 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:17:35.674183   20193 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:17:35.721182   20193 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:17:35.721232   20193 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:17:35.887406   20193 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:17:35.887494   20193 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:17:35.887562   20193 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:17:36.043376   20193 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:17:36.044020   20193 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:17:36.050600   20193 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:17:36.111918   20193 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:17:36.154305   20193 out.go:204]   - Generating certificates and keys ...
	I0222 21:17:36.154408   20193 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:17:36.154480   20193 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:17:36.154535   20193 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 21:17:36.154582   20193 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 21:17:36.154646   20193 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 21:17:36.154698   20193 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 21:17:36.154776   20193 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 21:17:36.154831   20193 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 21:17:36.154924   20193 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 21:17:36.154997   20193 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 21:17:36.155031   20193 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 21:17:36.155095   20193 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:17:36.201275   20193 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:17:36.326654   20193 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:17:36.380596   20193 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:17:36.449360   20193 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:17:36.450154   20193 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:17:36.471914   20193 out.go:204]   - Booting up control plane ...
	I0222 21:17:36.472074   20193 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:17:36.472272   20193 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:17:36.472409   20193 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:17:36.472571   20193 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:17:36.472842   20193 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:18:16.459206   20193 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:18:16.459570   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:18:16.459749   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:18:21.461453   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:18:21.461661   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:18:31.462567   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:18:31.462789   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:18:51.463987   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:18:51.464199   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:19:31.464054   20193 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:19:31.464211   20193 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:19:31.464222   20193 kubeadm.go:322] 
	I0222 21:19:31.464254   20193 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:19:31.464293   20193 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:19:31.464298   20193 kubeadm.go:322] 
	I0222 21:19:31.464338   20193 kubeadm.go:322] This error is likely caused by:
	I0222 21:19:31.464370   20193 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:19:31.464460   20193 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:19:31.464475   20193 kubeadm.go:322] 
	I0222 21:19:31.464570   20193 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:19:31.464594   20193 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:19:31.464617   20193 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:19:31.464621   20193 kubeadm.go:322] 
	I0222 21:19:31.464690   20193 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:19:31.464768   20193 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:19:31.464833   20193 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:19:31.464873   20193 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:19:31.464941   20193 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:19:31.464974   20193 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:19:31.467750   20193 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:19:31.467809   20193 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:19:31.467919   20193 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:19:31.468014   20193 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:19:31.468096   20193 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:19:31.468152   20193 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0222 21:19:31.468177   20193 kubeadm.go:403] StartCluster complete in 3m54.153160044s
	I0222 21:19:31.468272   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:19:31.488197   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.488210   20193 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:19:31.488282   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:19:31.508265   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.508278   20193 logs.go:280] No container was found matching "etcd"
	I0222 21:19:31.508353   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:19:31.527185   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.527197   20193 logs.go:280] No container was found matching "coredns"
	I0222 21:19:31.527272   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:19:31.546608   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.546621   20193 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:19:31.546686   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:19:31.565326   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.565339   20193 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:19:31.565407   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:19:31.584803   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.584816   20193 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:19:31.584884   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:19:31.603671   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.603684   20193 logs.go:280] No container was found matching "kindnet"
	I0222 21:19:31.603750   20193 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:19:31.623704   20193 logs.go:278] 0 containers: []
	W0222 21:19:31.623718   20193 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:19:31.623726   20193 logs.go:124] Gathering logs for dmesg ...
	I0222 21:19:31.623733   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:19:31.635990   20193 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:19:31.636003   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:19:31.691327   20193 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:19:31.691339   20193 logs.go:124] Gathering logs for Docker ...
	I0222 21:19:31.691346   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:19:31.716520   20193 logs.go:124] Gathering logs for container status ...
	I0222 21:19:31.716535   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:19:33.762783   20193 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046263442s)
	I0222 21:19:33.762892   20193 logs.go:124] Gathering logs for kubelet ...
	I0222 21:19:33.762899   20193 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0222 21:19:33.799731   20193 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0222 21:19:33.799750   20193 out.go:239] * 
	* 
	W0222 21:19:33.799864   20193 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:19:33.799876   20193 out.go:239] * 
	* 
	W0222 21:19:33.800676   20193 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0222 21:19:33.864057   20193 out.go:177] 
	W0222 21:19:33.906277   20193 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:19:33.906338   20193 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0222 21:19:33.906377   20193 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0222 21:19:33.948226   20193 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-865000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-865000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-865000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c",
	        "Created": "2023-02-23T05:15:31.417090555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:15:31.720510028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hosts",
	        "LogPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c-json.log",
	        "Name": "/old-k8s-version-865000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-865000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-865000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-865000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-865000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-865000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "846d4edb0dc70a09bd7bcf368401555188a079858000e0bc8c50de2b0547be2c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/846d4edb0dc7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-865000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106a8b195383",
	                        "old-k8s-version-865000"
	                    ],
	                    "NetworkID": "947893b68cb410e9e5982aa5b8afeae1844c1ff30155168ea70efca5bffdb638",
	                    "EndpointID": "33790e86f4bb325b05c03827ee43602ee09867aed1cb461cd243797f092d3a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 6 (414.055276ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0222 21:19:34.512263   21353 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-865000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-865000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (251.77s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-865000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-865000 create -f testdata/busybox.yaml: exit status 1 (36.470399ms)

** stderr ** 
	error: context "old-k8s-version-865000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-865000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-865000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-865000:

-- stdout --
	[
	    {
	        "Id": "106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c",
	        "Created": "2023-02-23T05:15:31.417090555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:15:31.720510028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hosts",
	        "LogPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c-json.log",
	        "Name": "/old-k8s-version-865000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-865000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-865000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-865000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-865000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-865000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "846d4edb0dc70a09bd7bcf368401555188a079858000e0bc8c50de2b0547be2c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/846d4edb0dc7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-865000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106a8b195383",
	                        "old-k8s-version-865000"
	                    ],
	                    "NetworkID": "947893b68cb410e9e5982aa5b8afeae1844c1ff30155168ea70efca5bffdb638",
	                    "EndpointID": "33790e86f4bb325b05c03827ee43602ee09867aed1cb461cd243797f092d3a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 6 (400.187774ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 21:19:35.008527   21366 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-865000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-865000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-865000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-865000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c",
	        "Created": "2023-02-23T05:15:31.417090555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:15:31.720510028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hosts",
	        "LogPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c-json.log",
	        "Name": "/old-k8s-version-865000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-865000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-865000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-865000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-865000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-865000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "846d4edb0dc70a09bd7bcf368401555188a079858000e0bc8c50de2b0547be2c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/846d4edb0dc7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-865000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106a8b195383",
	                        "old-k8s-version-865000"
	                    ],
	                    "NetworkID": "947893b68cb410e9e5982aa5b8afeae1844c1ff30155168ea70efca5bffdb638",
	                    "EndpointID": "33790e86f4bb325b05c03827ee43602ee09867aed1cb461cd243797f092d3a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 6 (403.548614ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 21:19:35.472524   21380 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-865000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-865000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (108.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-865000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0222 21:19:43.087858    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 21:19:45.140686    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:45.147154    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:45.159426    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:45.180518    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:45.220673    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:45.301171    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:45.463163    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:45.784738    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:46.425263    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:47.707539    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:49.090876    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:50.269366    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:19:55.389722    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:20:03.158616    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 21:20:05.630005    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:20:17.985330    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:20:26.110523    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:20:30.052196    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:20:44.084883    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:44.091283    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:44.103521    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:44.125782    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:44.166970    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:44.247083    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:44.407279    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:44.727638    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:45.368556    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:46.648942    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:49.209488    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:54.278985    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:20:54.329617    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:20:55.695577    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:20:59.525029    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:21:04.570630    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:21:07.070413    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:21:21.971106    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-865000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m48.128440189s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-865000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-865000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-865000 describe deploy/metrics-server -n kube-system: exit status 1 (36.611544ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-865000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-865000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-865000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-865000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c",
	        "Created": "2023-02-23T05:15:31.417090555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:15:31.720510028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hosts",
	        "LogPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c-json.log",
	        "Name": "/old-k8s-version-865000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-865000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-865000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-865000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-865000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-865000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "846d4edb0dc70a09bd7bcf368401555188a079858000e0bc8c50de2b0547be2c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/846d4edb0dc7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-865000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106a8b195383",
	                        "old-k8s-version-865000"
	                    ],
	                    "NetworkID": "947893b68cb410e9e5982aa5b8afeae1844c1ff30155168ea70efca5bffdb638",
	                    "EndpointID": "33790e86f4bb325b05c03827ee43602ee09867aed1cb461cd243797f092d3a80",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 6 (398.591189ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 21:21:24.095664   21497 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-865000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-865000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (108.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (497.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-865000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0222 21:21:42.491490    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:21:51.971365    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:22:06.013823    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:22:10.180035    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:22:19.856200    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:22:28.990206    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:22:34.130635    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-865000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m11.940359629s)

                                                
                                                
-- stdout --
	* [old-k8s-version-865000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-865000 in cluster old-k8s-version-865000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-865000" ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 21:21:26.103395   21529 out.go:296] Setting OutFile to fd 1 ...
	I0222 21:21:26.103562   21529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:21:26.103567   21529 out.go:309] Setting ErrFile to fd 2...
	I0222 21:21:26.103571   21529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:21:26.103677   21529 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 21:21:26.105140   21529 out.go:303] Setting JSON to false
	I0222 21:21:26.123849   21529 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4861,"bootTime":1677124825,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 21:21:26.123926   21529 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 21:21:26.145520   21529 out.go:177] * [old-k8s-version-865000] minikube v1.29.0 on Darwin 13.2
	I0222 21:21:26.187586   21529 notify.go:220] Checking for updates...
	I0222 21:21:26.187600   21529 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 21:21:26.209604   21529 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:21:26.231738   21529 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 21:21:26.253514   21529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 21:21:26.274461   21529 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 21:21:26.295695   21529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 21:21:26.318073   21529 config.go:182] Loaded profile config "old-k8s-version-865000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0222 21:21:26.340333   21529 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0222 21:21:26.361305   21529 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 21:21:26.422833   21529 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 21:21:26.422968   21529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:21:26.581709   21529 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:21:26.473934858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:21:26.623908   21529 out.go:177] * Using the docker driver based on existing profile
	I0222 21:21:26.644913   21529 start.go:296] selected driver: docker
	I0222 21:21:26.644930   21529 start.go:857] validating driver "docker" against &{Name:old-k8s-version-865000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:21:26.644993   21529 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 21:21:26.647556   21529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:21:26.797192   21529 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:21:26.703367494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:21:26.797376   21529 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0222 21:21:26.797401   21529 cni.go:84] Creating CNI manager for ""
	I0222 21:21:26.797413   21529 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 21:21:26.797421   21529 start_flags.go:319] config:
	{Name:old-k8s-version-865000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:21:26.840175   21529 out.go:177] * Starting control plane node old-k8s-version-865000 in cluster old-k8s-version-865000
	I0222 21:21:26.862023   21529 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 21:21:26.883960   21529 out.go:177] * Pulling base image ...
	I0222 21:21:26.926258   21529 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:21:26.926340   21529 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 21:21:26.926360   21529 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0222 21:21:26.926378   21529 cache.go:57] Caching tarball of preloaded images
	I0222 21:21:26.927183   21529 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 21:21:26.927365   21529 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0222 21:21:26.927892   21529 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/config.json ...
	I0222 21:21:26.983652   21529 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 21:21:26.983695   21529 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 21:21:26.983715   21529 cache.go:193] Successfully downloaded all kic artifacts
	I0222 21:21:26.983763   21529 start.go:364] acquiring machines lock for old-k8s-version-865000: {Name:mk6cb26d76424f531c6197cd66b272c20668b8f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 21:21:26.983885   21529 start.go:368] acquired machines lock for "old-k8s-version-865000" in 102.309µs
	I0222 21:21:26.983911   21529 start.go:96] Skipping create...Using existing machine configuration
	I0222 21:21:26.983919   21529 fix.go:55] fixHost starting: 
	I0222 21:21:26.984151   21529 cli_runner.go:164] Run: docker container inspect old-k8s-version-865000 --format={{.State.Status}}
	I0222 21:21:27.041390   21529 fix.go:103] recreateIfNeeded on old-k8s-version-865000: state=Stopped err=<nil>
	W0222 21:21:27.041435   21529 fix.go:129] unexpected machine state, will restart: <nil>
	I0222 21:21:27.063062   21529 out.go:177] * Restarting existing docker container for "old-k8s-version-865000" ...
	I0222 21:21:27.084072   21529 cli_runner.go:164] Run: docker start old-k8s-version-865000
	I0222 21:21:27.419938   21529 cli_runner.go:164] Run: docker container inspect old-k8s-version-865000 --format={{.State.Status}}
	I0222 21:21:27.483510   21529 kic.go:426] container "old-k8s-version-865000" state is running.
	I0222 21:21:27.484163   21529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-865000
	I0222 21:21:27.553222   21529 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/config.json ...
	I0222 21:21:27.553644   21529 machine.go:88] provisioning docker machine ...
	I0222 21:21:27.553690   21529 ubuntu.go:169] provisioning hostname "old-k8s-version-865000"
	I0222 21:21:27.553775   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:27.625107   21529 main.go:141] libmachine: Using SSH client type: native
	I0222 21:21:27.625528   21529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54722 <nil> <nil>}
	I0222 21:21:27.625542   21529 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-865000 && echo "old-k8s-version-865000" | sudo tee /etc/hostname
	I0222 21:21:27.775136   21529 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-865000
	
	I0222 21:21:27.775221   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:27.837838   21529 main.go:141] libmachine: Using SSH client type: native
	I0222 21:21:27.838192   21529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54722 <nil> <nil>}
	I0222 21:21:27.838205   21529 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-865000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-865000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-865000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 21:21:27.978527   21529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
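
Each `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` run above resolves the host port Docker published for the container's SSH port (54722 in this run), which is then dialed as 127.0.0.1:54722. A minimal sketch of that lookup (illustrative; `sshHostPort` is a name used only here, not minikube's code):

package main

// Resolve the host port mapped to a container's 22/tcp, as the provisioning
// steps above do before opening an SSH session to 127.0.0.1:<port>.
// Illustrative sketch only.

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-865000")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh target: 127.0.0.1:" + port) // 54722 in the run above
}
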
	I0222 21:21:27.978555   21529 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 21:21:27.978576   21529 ubuntu.go:177] setting up certificates
	I0222 21:21:27.978584   21529 provision.go:83] configureAuth start
	I0222 21:21:27.978655   21529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-865000
	I0222 21:21:28.037093   21529 provision.go:138] copyHostCerts
	I0222 21:21:28.037195   21529 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 21:21:28.037206   21529 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 21:21:28.037300   21529 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 21:21:28.037517   21529 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 21:21:28.037523   21529 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 21:21:28.037589   21529 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 21:21:28.037734   21529 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 21:21:28.037740   21529 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 21:21:28.037806   21529 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 21:21:28.037925   21529 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-865000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-865000]
	I0222 21:21:28.107727   21529 provision.go:172] copyRemoteCerts
	I0222 21:21:28.107785   21529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 21:21:28.107837   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:28.165736   21529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54722 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:21:28.260341   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 21:21:28.277849   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0222 21:21:28.295188   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 21:21:28.312304   21529 provision.go:86] duration metric: configureAuth took 333.710929ms
	I0222 21:21:28.312317   21529 ubuntu.go:193] setting minikube options for container-runtime
	I0222 21:21:28.312472   21529 config.go:182] Loaded profile config "old-k8s-version-865000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0222 21:21:28.312541   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:28.370614   21529 main.go:141] libmachine: Using SSH client type: native
	I0222 21:21:28.370955   21529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54722 <nil> <nil>}
	I0222 21:21:28.370964   21529 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 21:21:28.501989   21529 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 21:21:28.502002   21529 ubuntu.go:71] root file system type: overlay
	I0222 21:21:28.502113   21529 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 21:21:28.502195   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:28.560182   21529 main.go:141] libmachine: Using SSH client type: native
	I0222 21:21:28.560567   21529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54722 <nil> <nil>}
	I0222 21:21:28.560621   21529 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 21:21:28.703416   21529 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 21:21:28.703522   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:28.762318   21529 main.go:141] libmachine: Using SSH client type: native
	I0222 21:21:28.762676   21529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54722 <nil> <nil>}
	I0222 21:21:28.762689   21529 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 21:21:28.901707   21529 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:21:28.901724   21529 machine.go:91] provisioned docker machine in 1.348089824s
	I0222 21:21:28.901735   21529 start.go:300] post-start starting for "old-k8s-version-865000" (driver="docker")
	I0222 21:21:28.901740   21529 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 21:21:28.901812   21529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 21:21:28.901867   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:28.959310   21529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54722 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:21:29.054435   21529 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 21:21:29.057998   21529 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 21:21:29.058015   21529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 21:21:29.058028   21529 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 21:21:29.058033   21529 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 21:21:29.058040   21529 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 21:21:29.058136   21529 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 21:21:29.058308   21529 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 21:21:29.058495   21529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 21:21:29.065851   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:21:29.083169   21529 start.go:303] post-start completed in 181.421167ms
	I0222 21:21:29.083245   21529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 21:21:29.083305   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:29.142285   21529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54722 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:21:29.234942   21529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 21:21:29.239504   21529 fix.go:57] fixHost completed within 2.255610534s
	I0222 21:21:29.239526   21529 start.go:83] releasing machines lock for "old-k8s-version-865000", held for 2.255662062s
	I0222 21:21:29.239623   21529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-865000
	I0222 21:21:29.297535   21529 ssh_runner.go:195] Run: cat /version.json
	I0222 21:21:29.297568   21529 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0222 21:21:29.297606   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:29.297652   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:29.364154   21529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54722 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:21:29.364337   21529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54722 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/old-k8s-version-865000/id_rsa Username:docker}
	I0222 21:21:29.723631   21529 ssh_runner.go:195] Run: systemctl --version
	I0222 21:21:29.728406   21529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0222 21:21:29.733030   21529 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0222 21:21:29.733087   21529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0222 21:21:29.740732   21529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0222 21:21:29.748177   21529 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0222 21:21:29.748197   21529 start.go:485] detecting cgroup driver to use...
	I0222 21:21:29.748207   21529 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:21:29.748278   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:21:29.761362   21529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0222 21:21:29.770430   21529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 21:21:29.779710   21529 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 21:21:29.779774   21529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 21:21:29.788845   21529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:21:29.797227   21529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 21:21:29.805920   21529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:21:29.814613   21529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 21:21:29.822498   21529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 21:21:29.830843   21529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 21:21:29.838220   21529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 21:21:29.845451   21529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:21:29.913909   21529 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 21:21:29.982984   21529 start.go:485] detecting cgroup driver to use...
	I0222 21:21:29.983007   21529 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:21:29.983073   21529 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 21:21:29.994527   21529 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 21:21:29.994593   21529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 21:21:30.005962   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:21:30.021363   21529 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 21:21:30.091582   21529 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 21:21:30.180167   21529 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 21:21:30.180186   21529 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 21:21:30.193717   21529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:21:30.279744   21529 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 21:21:30.515228   21529 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:21:30.541707   21529 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:21:30.609896   21529 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0222 21:21:30.610115   21529 cli_runner.go:164] Run: docker exec -t old-k8s-version-865000 dig +short host.docker.internal
	I0222 21:21:30.726061   21529 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:21:30.726167   21529 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:21:30.730663   21529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:21:30.741423   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:30.801956   21529 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 21:21:30.802027   21529 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:21:30.823445   21529 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:21:30.823462   21529 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:21:30.823544   21529 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:21:30.843728   21529 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0222 21:21:30.843755   21529 cache_images.go:84] Images are preloaded, skipping loading
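
The two identical image listings above are the preload check: `docker images --format {{.Repository}}:{{.Tag}}` is compared against the image set expected for v1.16.0, and because everything is already present the preload tarball is not extracted. A small sketch of that comparison (illustrative only; the expected list is copied from the output above):

package main

// Check whether the expected v1.16.0 images are already present in the local
// Docker daemon, mirroring the "Images already preloaded" decision above.
// Illustrative sketch only.

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/kube-proxy:v1.16.0",
		"k8s.gcr.io/kube-controller-manager:v1.16.0",
		"k8s.gcr.io/kube-scheduler:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/coredns:1.6.2",
		"k8s.gcr.io/pause:3.1",
	}

	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
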
	I0222 21:21:30.843852   21529 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:21:30.870215   21529 cni.go:84] Creating CNI manager for ""
	I0222 21:21:30.870233   21529 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 21:21:30.870248   21529 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 21:21:30.870268   21529 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-865000 NodeName:old-k8s-version-865000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:21:30.870376   21529 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-865000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-865000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 21:21:30.870457   21529 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-865000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
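
The kubeadm YAML and the kubelet ExecStart flags above are both rendered from the options logged at kubeadm.go:172 (AdvertiseAddress, APIServerPort, KubernetesVersion, PodSubnet, and so on). A toy text/template sketch of that rendering step, assuming a much-reduced template; minikube's real templates carry many more fields and the struct here is invented for illustration:

package main

// Render a tiny slice of a kubeadm config from a struct, illustrating how the
// options above become the YAML written to /var/tmp/minikube/kubeadm.yaml.new.
// Not minikube's actual template.

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.76.2", // values from the log above
		APIServerPort:     8443,
		KubernetesVersion: "v1.16.0",
		PodSubnet:         "10.244.0.0/16",
	}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
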
	I0222 21:21:30.870523   21529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0222 21:21:30.879701   21529 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:21:30.879802   21529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:21:30.887515   21529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0222 21:21:30.900369   21529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:21:30.914000   21529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0222 21:21:30.927306   21529 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:21:30.931337   21529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:21:30.942078   21529 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000 for IP: 192.168.76.2
	I0222 21:21:30.942097   21529 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:21:30.942272   21529 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:21:30.942336   21529 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:21:30.942432   21529 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/client.key
	I0222 21:21:30.942528   21529 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key.31bdca25
	I0222 21:21:30.942587   21529 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.key
	I0222 21:21:30.942816   21529 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:21:30.942860   21529 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:21:30.942871   21529 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:21:30.942905   21529 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:21:30.942943   21529 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:21:30.942973   21529 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:21:30.943046   21529 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:21:30.943624   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:21:30.961526   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0222 21:21:30.979083   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:21:30.997349   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/old-k8s-version-865000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0222 21:21:31.014854   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:21:31.033824   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:21:31.051562   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:21:31.068926   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:21:31.086206   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:21:31.103601   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:21:31.144587   21529 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:21:31.162113   21529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:21:31.175198   21529 ssh_runner.go:195] Run: openssl version
	I0222 21:21:31.180589   21529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:21:31.188883   21529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:21:31.192993   21529 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:21:31.193046   21529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:21:31.198511   21529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 21:21:31.206356   21529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:21:31.214331   21529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:21:31.218115   21529 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:21:31.218164   21529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:21:31.223834   21529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:21:31.231757   21529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:21:31.239999   21529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:21:31.244085   21529 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:21:31.244141   21529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:21:31.249582   21529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
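
The `openssl x509 -hash -noout -in ...` runs above compute the subject-hash names (3ec20f2e.0, b5213941.0, 51391683.0) under which OpenSSL looks certificates up in /etc/ssl/certs; each `ln -fs` then installs the corresponding CA under that name. A compact sketch of the hash-then-symlink step (illustrative; `installCA` is a name used only here, not minikube's code, and writing to /etc/ssl/certs needs root, which the log gets via sudo):

package main

// Compute a certificate's OpenSSL subject hash and install it as
// /etc/ssl/certs/<hash>.0, mirroring the openssl + ln -fs sequence above.
// Illustrative sketch only.

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(cert string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Replace any stale link; the log instead skips the ln when a link already exists.
	_ = os.Remove(link)
	return os.Symlink(cert, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
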
	I0222 21:21:31.257423   21529 kubeadm.go:401] StartCluster: {Name:old-k8s-version-865000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-865000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:21:31.257615   21529 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:21:31.277869   21529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:21:31.285819   21529 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0222 21:21:31.285834   21529 kubeadm.go:633] restartCluster start
	I0222 21:21:31.285885   21529 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0222 21:21:31.293096   21529 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:31.293204   21529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-865000
	I0222 21:21:31.352731   21529 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-865000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:21:31.352901   21529 kubeconfig.go:146] "old-k8s-version-865000" context is missing from /Users/jenkins/minikube-integration/15909-2664/kubeconfig - will repair!
	I0222 21:21:31.353232   21529 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:21:31.354572   21529 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0222 21:21:31.362916   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:31.362996   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:31.372688   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:31.872928   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:31.873085   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:31.882612   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:32.373684   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:32.373865   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:32.384661   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:32.874800   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:32.874965   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:32.885636   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:33.373467   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:33.373633   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:33.384753   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:33.872760   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:33.872889   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:33.882396   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:34.374771   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:34.374967   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:34.385936   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:34.872977   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:34.873154   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:34.884239   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:35.374801   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:35.374957   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:35.386145   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:35.873505   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:35.873690   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:35.884854   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:36.374693   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:36.374823   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:36.385009   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:36.873100   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:36.873222   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:36.883076   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:37.373666   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:37.373825   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:37.384529   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:37.873193   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:37.873299   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:37.884229   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:38.374701   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:38.374949   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:38.386241   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:38.874736   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:38.874936   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:38.886230   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:39.373386   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:39.373529   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:39.384407   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:39.873036   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:39.873183   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:39.884095   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:40.373558   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:40.373763   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:40.384144   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:40.872624   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:40.872718   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:40.882154   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:41.372860   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:41.373009   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:41.383837   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:41.383848   21529 api_server.go:165] Checking apiserver status ...
	I0222 21:21:41.383902   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:21:41.392485   21529 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:21:41.392498   21529 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0222 21:21:41.392506   21529 kubeadm.go:1120] stopping kube-system containers ...
	I0222 21:21:41.392576   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:21:41.411422   21529 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0222 21:21:41.422217   21529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:21:41.430053   21529 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 23 05:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 23 05:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 23 05:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 23 05:17 /etc/kubernetes/scheduler.conf
	
	I0222 21:21:41.430119   21529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0222 21:21:41.437460   21529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0222 21:21:41.444967   21529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0222 21:21:41.452579   21529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0222 21:21:41.459904   21529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:21:41.467671   21529 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0222 21:21:41.467684   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:21:41.519603   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:21:41.906365   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:21:42.067869   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:21:42.133190   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:21:42.193510   21529 api_server.go:51] waiting for apiserver process to appear ...
	I0222 21:21:42.193578   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:42.703294   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:43.203275   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:43.702597   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:44.203065   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:44.704162   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:45.202485   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:45.703158   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:46.202521   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:46.702502   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:47.203512   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:47.704452   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:48.202767   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:48.702341   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:49.203129   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:49.702378   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:50.203298   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:50.702802   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:51.202878   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:51.702916   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:52.203837   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:52.704410   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:53.203312   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:53.702623   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:54.202368   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:54.702429   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:55.202556   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:55.702616   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:56.202700   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:56.702340   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:57.203079   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:57.704367   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:58.202493   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:58.703528   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:59.202279   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:21:59.702541   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:00.203198   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:00.702534   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:01.202162   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:01.703411   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:02.203256   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:02.704169   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:03.203058   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:03.702141   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:04.202269   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:04.703163   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:05.202977   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:05.704245   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:06.202156   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:06.703668   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:07.202838   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:07.702889   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:08.202981   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:08.703003   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:09.202182   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:09.702261   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:10.202242   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:10.702462   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:11.202212   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:11.703466   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:12.202070   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:12.701985   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:13.202382   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:13.702039   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:14.202154   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:14.701992   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:15.201975   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:15.702029   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:16.201964   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:16.702471   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:17.202532   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:17.702437   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:18.202506   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:18.702291   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:19.203578   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:19.703885   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:20.201922   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:20.703016   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:21.203106   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:21.703825   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:22.203964   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:22.702329   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:23.203949   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:23.702275   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:24.202198   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:24.701866   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:25.202051   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:25.703875   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:26.203819   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:26.702787   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:27.202930   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:27.702275   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:28.202757   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:28.701950   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:29.201815   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:29.702989   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:30.203788   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:30.701941   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:31.202678   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:31.703491   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:32.203572   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:32.702267   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:33.202335   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:33.703339   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:34.201789   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:34.702151   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:35.202067   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:35.702053   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:36.202371   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:36.702654   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:37.201984   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:37.701803   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:38.201963   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:38.703798   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:39.201916   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:39.701958   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:40.202881   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:40.702118   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:41.201642   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:41.703003   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
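	[editor's note: the block above is the "waiting for apiserver process to appear" loop: the same pgrep command is re-run roughly every 500ms until it succeeds or the wait gives up, at which point the run falls through to the log-gathering that follows. Below is a minimal standalone Go approximation of that polling pattern, for illustration only; it is not the actual apiserver.go implementation, and the one-minute timeout is an assumed value.]

	// Hypothetical sketch of the poll-until-deadline loop visible in the log:
	//   sudo pgrep -xnf kube-apiserver.*minikube.*   every ~500ms
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(time.Minute); err != nil {
			fmt.Println(err) // in the real run, minikube then starts gathering logs
		}
	}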
	I0222 21:22:42.201760   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:22:42.222985   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.222998   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:22:42.223066   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:22:42.244050   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.244067   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:22:42.244163   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:22:42.263856   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.263871   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:22:42.263948   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:22:42.283151   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.283166   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:22:42.283235   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:22:42.306434   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.306448   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:22:42.306517   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:22:42.327273   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.327286   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:22:42.327356   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:22:42.347812   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.347829   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:22:42.347911   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:22:42.367683   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.367696   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:22:42.367772   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:22:42.391973   21529 logs.go:278] 0 containers: []
	W0222 21:22:42.391988   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:22:42.391996   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:22:42.392003   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:22:42.433189   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:22:42.433206   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:22:42.446989   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:22:42.447015   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:22:42.516503   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:22:42.516537   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:22:42.516550   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:22:42.560501   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:22:42.560522   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:22:44.620531   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060025507s)
	I0222 21:22:47.121424   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:47.202617   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:22:47.222891   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.222904   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:22:47.222977   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:22:47.243622   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.243636   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:22:47.243705   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:22:47.264786   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.264803   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:22:47.264896   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:22:47.284070   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.284083   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:22:47.284156   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:22:47.303388   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.303404   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:22:47.303485   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:22:47.324725   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.324738   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:22:47.324840   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:22:47.349173   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.349197   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:22:47.349275   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:22:47.369659   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.369674   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:22:47.369755   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:22:47.390146   21529 logs.go:278] 0 containers: []
	W0222 21:22:47.390160   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:22:47.390170   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:22:47.390183   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:22:47.455897   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:22:47.455911   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:22:47.455919   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:22:47.478550   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:22:47.478570   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:22:49.526459   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047905693s)
	I0222 21:22:49.526618   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:22:49.526627   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:22:49.577164   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:22:49.577185   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:22:52.092813   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:52.202040   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:22:52.222066   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.222081   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:22:52.222161   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:22:52.243878   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.243892   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:22:52.243964   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:22:52.269177   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.269192   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:22:52.269277   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:22:52.291243   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.291257   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:22:52.291354   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:22:52.310697   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.310711   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:22:52.310790   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:22:52.330217   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.330230   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:22:52.330306   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:22:52.350995   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.351007   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:22:52.351075   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:22:52.370803   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.370818   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:22:52.370894   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:22:52.390862   21529 logs.go:278] 0 containers: []
	W0222 21:22:52.390875   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:22:52.390882   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:22:52.390889   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:22:52.432290   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:22:52.432307   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:22:52.445261   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:22:52.445275   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:22:52.500481   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:22:52.500495   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:22:52.500502   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:22:52.523306   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:22:52.523322   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:22:54.574446   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051138751s)
	I0222 21:22:57.074710   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:22:57.203419   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:22:57.225249   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.225270   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:22:57.225388   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:22:57.249897   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.249945   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:22:57.250090   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:22:57.269217   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.269231   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:22:57.269297   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:22:57.288261   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.288274   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:22:57.288341   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:22:57.307931   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.307945   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:22:57.308017   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:22:57.327317   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.327332   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:22:57.327404   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:22:57.347779   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.347797   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:22:57.347868   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:22:57.366983   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.366997   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:22:57.367068   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:22:57.387403   21529 logs.go:278] 0 containers: []
	W0222 21:22:57.387417   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:22:57.387425   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:22:57.387432   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:22:57.399645   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:22:57.399665   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:22:57.456865   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:22:57.456876   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:22:57.456883   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:22:57.479146   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:22:57.479165   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:22:59.523017   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043866818s)
	I0222 21:22:59.523132   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:22:59.523139   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:02.072716   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:02.203305   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:02.225603   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.225617   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:02.225679   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:02.247120   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.247134   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:02.247201   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:02.270031   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.270045   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:02.270105   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:02.291896   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.291911   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:02.291975   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:02.314257   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.314271   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:02.314370   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:02.341484   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.341519   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:02.341610   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:02.364783   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.364797   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:02.364877   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:02.385532   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.385545   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:02.385637   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:02.437155   21529 logs.go:278] 0 containers: []
	W0222 21:23:02.437169   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:02.437179   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:02.437186   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:02.462590   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:02.462607   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:04.511789   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049196391s)
	I0222 21:23:04.511907   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:04.511918   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:04.554919   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:04.554942   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:04.569235   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:04.569252   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:04.634414   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:07.134917   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:07.201921   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:07.223918   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.223931   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:07.224006   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:07.245336   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.245350   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:07.245418   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:07.267869   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.267886   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:07.267959   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:07.290898   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.290913   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:07.290981   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:07.312475   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.312487   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:07.312561   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:07.337892   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.337913   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:07.337989   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:07.359055   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.359069   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:07.359149   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:07.382170   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.382191   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:07.382264   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:07.404928   21529 logs.go:278] 0 containers: []
	W0222 21:23:07.404942   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:07.404949   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:07.404957   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:07.418437   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:07.418459   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:07.482492   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:07.482505   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:07.482517   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:07.506765   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:07.506784   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:09.556545   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049772131s)
	I0222 21:23:09.556685   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:09.556697   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:12.102053   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:12.201271   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:12.226556   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.226572   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:12.226665   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:12.256556   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.256612   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:12.256730   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:12.285185   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.285200   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:12.285288   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:12.310932   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.310949   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:12.311065   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:12.338126   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.338145   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:12.338230   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:12.364305   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.364318   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:12.364398   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:12.392300   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.392316   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:12.392390   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:12.418001   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.418017   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:12.418118   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:12.444180   21529 logs.go:278] 0 containers: []
	W0222 21:23:12.444198   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:12.444209   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:12.444220   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:12.474220   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:12.474244   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:14.530069   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055837157s)
	I0222 21:23:14.530184   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:14.530206   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:14.575172   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:14.575192   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:14.589124   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:14.589146   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:14.655411   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:17.156394   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:17.201886   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:17.224619   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.224634   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:17.224709   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:17.245902   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.245918   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:17.245990   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:17.268431   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.268462   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:17.268579   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:17.292415   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.292432   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:17.292519   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:17.313728   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.313744   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:17.314348   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:17.342825   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.342841   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:17.342922   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:17.363955   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.363968   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:17.364039   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:17.384875   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.384890   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:17.384967   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:17.410952   21529 logs.go:278] 0 containers: []
	W0222 21:23:17.410969   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:17.410977   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:17.410987   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:17.476782   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:17.476795   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:17.476809   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:17.501162   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:17.501182   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:19.554311   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053142367s)
	I0222 21:23:19.554456   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:19.554467   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:19.597409   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:19.597432   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:22.111695   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:22.201756   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:22.223364   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.223380   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:22.223456   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:22.245927   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.245939   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:22.246013   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:22.267389   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.267405   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:22.267483   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:22.288715   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.288730   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:22.288805   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:22.310164   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.310179   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:22.310254   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:22.331574   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.331591   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:22.331674   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:22.354103   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.354116   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:22.354188   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:22.375439   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.375453   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:22.375529   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:22.397530   21529 logs.go:278] 0 containers: []
	W0222 21:23:22.397545   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:22.397553   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:22.397562   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:22.442657   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:22.442675   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:22.456038   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:22.456051   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:22.515381   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:22.515397   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:22.515404   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:22.538401   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:22.538418   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:24.593273   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05486833s)
	I0222 21:23:27.093519   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:27.201924   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:27.223059   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.223072   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:27.223143   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:27.241694   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.241708   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:27.241777   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:27.262037   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.262052   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:27.262122   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:27.282087   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.282100   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:27.282167   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:27.302265   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.302284   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:27.302362   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:27.322455   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.322468   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:27.322546   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:27.342129   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.342143   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:27.342214   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:27.361541   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.361555   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:27.361638   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:27.381643   21529 logs.go:278] 0 containers: []
	W0222 21:23:27.381656   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:27.381663   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:27.381670   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:27.423680   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:27.423695   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:27.436743   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:27.436757   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:27.492253   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:27.492270   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:27.492282   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:27.514592   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:27.514606   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:29.559978   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045386561s)
	I0222 21:23:32.060887   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:32.201601   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:32.223371   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.223386   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:32.223463   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:32.243308   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.243322   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:32.243392   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:32.262937   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.262951   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:32.263023   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:32.280939   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.280954   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:32.281034   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:32.306701   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.306715   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:32.306791   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:32.329390   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.329403   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:32.329481   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:32.353942   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.353956   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:32.354030   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:32.376243   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.376263   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:32.376397   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:32.398084   21529 logs.go:278] 0 containers: []
	W0222 21:23:32.398098   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:32.398108   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:32.398117   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:32.410304   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:32.410317   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:32.471132   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:32.471143   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:32.471151   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:32.494978   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:32.494997   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:34.549357   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054375004s)
	I0222 21:23:34.549475   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:34.549482   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:37.101825   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:37.201298   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:37.222089   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.222103   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:37.222172   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:37.240503   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.240517   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:37.240595   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:37.259535   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.259550   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:37.259639   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:37.279663   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.279676   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:37.279743   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:37.300880   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.300896   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:37.300965   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:37.321005   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.321019   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:37.321089   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:37.340971   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.340984   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:37.341056   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:37.360423   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.360437   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:37.360506   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:37.380909   21529 logs.go:278] 0 containers: []
	W0222 21:23:37.380922   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:37.380931   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:37.380938   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:37.421961   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:37.421976   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:37.434712   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:37.434727   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:37.492457   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:37.492471   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:37.492480   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:37.514924   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:37.514939   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:39.560051   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045125848s)
	I0222 21:23:42.060232   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:42.201973   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:42.222483   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.222496   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:42.222566   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:42.241607   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.241621   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:42.241693   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:42.262507   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.262521   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:42.262591   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:42.281895   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.281910   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:42.281979   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:42.301598   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.301612   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:42.301684   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:42.321214   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.321227   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:42.321305   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:42.341215   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.341229   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:42.341305   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:42.360496   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.360509   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:42.360577   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:42.381238   21529 logs.go:278] 0 containers: []
	W0222 21:23:42.381252   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:42.381260   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:42.381267   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:44.426962   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045709349s)
	I0222 21:23:44.427106   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:44.427116   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:44.465415   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:44.465449   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:44.478756   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:44.478771   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:44.534852   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:44.534863   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:44.534870   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:47.057675   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:47.202402   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:47.222101   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.222114   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:47.222186   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:47.242382   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.242396   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:47.242469   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:47.261765   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.261778   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:47.261845   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:47.281277   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.281291   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:47.281359   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:47.301347   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.301360   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:47.301441   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:47.321903   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.321917   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:47.322006   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:47.343361   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.343379   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:47.343457   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:47.364433   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.364446   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:47.364519   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:47.388360   21529 logs.go:278] 0 containers: []
	W0222 21:23:47.388374   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:47.388382   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:47.388389   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:47.430634   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:47.430649   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:47.443609   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:47.443623   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:47.500124   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:47.500137   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:47.500148   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:47.523675   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:47.523690   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:49.570089   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046413165s)
	I0222 21:23:52.070276   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:52.202737   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:52.222668   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.222681   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:52.222753   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:52.245592   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.245617   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:52.245744   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:52.278343   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.278375   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:52.278503   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:52.298917   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.298939   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:52.299019   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:52.317979   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.317992   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:52.318063   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:52.348816   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.348870   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:52.348955   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:52.377792   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.377820   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:52.377910   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:52.397970   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.397983   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:52.398053   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:52.417068   21529 logs.go:278] 0 containers: []
	W0222 21:23:52.417082   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:52.417089   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:52.417096   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:52.457845   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:52.457869   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:52.476289   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:52.476304   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:52.532249   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:52.532264   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:52.532274   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:52.560703   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:52.560723   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:54.621765   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061054563s)
	I0222 21:23:57.122156   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:23:57.200952   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:23:57.222579   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.222593   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:23:57.222663   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:23:57.242970   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.242985   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:23:57.243058   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:23:57.263151   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.263164   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:23:57.263235   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:23:57.282593   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.282607   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:23:57.282676   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:23:57.302365   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.302378   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:23:57.302446   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:23:57.321748   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.321762   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:23:57.321834   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:23:57.342360   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.342374   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:23:57.342449   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:23:57.362085   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.362099   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:23:57.362184   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:23:57.381786   21529 logs.go:278] 0 containers: []
	W0222 21:23:57.381800   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:23:57.381807   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:23:57.381817   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:23:57.422591   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:23:57.422606   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:23:57.435120   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:23:57.435134   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:23:57.490030   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:23:57.490042   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:23:57.490049   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:23:57.512778   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:23:57.512792   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:23:59.556316   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04353855s)
	I0222 21:24:02.056573   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:02.200767   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:02.221677   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.221690   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:02.221762   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:02.241175   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.241189   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:02.241265   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:02.261206   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.261220   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:02.261293   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:02.281044   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.281058   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:02.281127   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:02.301571   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.301584   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:02.301660   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:02.324107   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.324120   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:02.324191   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:02.346230   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.346246   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:02.346325   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:02.367122   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.367136   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:02.367206   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:02.388877   21529 logs.go:278] 0 containers: []
	W0222 21:24:02.388891   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:02.388904   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:02.388911   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:02.428535   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:02.428552   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:02.441228   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:02.441243   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:02.496761   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:02.496802   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:02.496823   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:02.519183   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:02.519201   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:04.566627   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047439914s)
	I0222 21:24:07.068945   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:07.200871   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:07.222587   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.222602   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:07.222669   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:07.241729   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.241752   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:07.241831   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:07.263257   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.263270   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:07.263337   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:07.283168   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.283182   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:07.283253   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:07.303987   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.303999   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:07.304090   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:07.324168   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.324182   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:07.324251   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:07.344597   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.344610   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:07.344685   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:07.365240   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.365253   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:07.365338   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:07.385321   21529 logs.go:278] 0 containers: []
	W0222 21:24:07.385335   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:07.385342   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:07.385358   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:07.425229   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:07.425246   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:07.438046   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:07.438061   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:07.496495   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:07.496506   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:07.496513   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:07.518019   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:07.518035   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:09.565396   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047375333s)
	I0222 21:24:12.065849   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:12.200869   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:12.221837   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.221851   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:12.221929   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:12.241563   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.241577   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:12.241649   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:12.261965   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.261979   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:12.262052   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:12.281729   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.281744   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:12.281810   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:12.300710   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.300724   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:12.300796   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:12.319734   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.319750   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:12.319831   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:12.339595   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.339609   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:12.339680   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:12.358937   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.358951   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:12.359024   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:12.378022   21529 logs.go:278] 0 containers: []
	W0222 21:24:12.378036   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:12.378043   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:12.378051   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:12.417973   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:12.417989   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:12.430543   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:12.430557   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:12.486049   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:12.486060   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:12.486068   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:12.513410   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:12.513432   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:14.559707   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046287249s)
	I0222 21:24:17.060055   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:17.200846   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:17.222134   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.222163   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:17.222234   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:17.241400   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.241415   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:17.241489   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:17.261710   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.261726   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:17.261798   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:17.282734   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.282748   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:17.282820   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:17.303493   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.303506   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:17.303578   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:17.326746   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.326763   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:17.326866   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:17.347063   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.347082   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:17.347183   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:17.367896   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.367910   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:17.367977   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:17.389011   21529 logs.go:278] 0 containers: []
	W0222 21:24:17.389024   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:17.389031   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:17.389039   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:17.431053   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:17.431068   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:17.444047   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:17.444060   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:17.499322   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:17.499335   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:17.499341   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:17.521985   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:17.522000   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:19.567470   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04548226s)
	I0222 21:24:22.068697   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:22.202473   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:22.223634   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.223650   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:22.223723   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:22.242442   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.242456   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:22.242525   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:22.262224   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.262246   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:22.262313   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:22.283179   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.283193   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:22.283263   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:22.302400   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.302419   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:22.302490   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:22.322692   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.322706   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:22.322773   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:22.341688   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.341702   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:22.341770   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:22.361221   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.361236   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:22.361305   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:22.381991   21529 logs.go:278] 0 containers: []
	W0222 21:24:22.382005   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:22.382013   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:22.382020   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:22.437978   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:22.438025   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:22.438032   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:22.460613   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:22.460629   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:24.508912   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048298388s)
	I0222 21:24:24.509026   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:24.509033   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:24.548257   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:24.548272   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:27.060593   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:27.200227   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:27.222130   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.222146   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:27.222223   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:27.243760   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.243773   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:27.243856   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:27.265107   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.265122   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:27.265207   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:27.287276   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.287288   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:27.287361   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:27.309414   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.309429   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:27.309522   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:27.331882   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.331895   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:27.331964   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:27.353906   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.353920   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:27.353995   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:27.374688   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.374701   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:27.374769   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:27.394854   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.394870   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:27.394878   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:27.394886   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:27.434821   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:27.434838   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:27.447530   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:27.447543   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:27.506872   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:27.506887   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:27.506893   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:27.530367   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:27.530385   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:29.577829   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047458812s)
	I0222 21:24:32.078067   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:32.200204   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:32.220966   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.220982   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:32.221056   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:32.243113   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.243127   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:32.243196   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:32.262917   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.262934   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:32.263020   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:32.287162   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.287177   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:32.287249   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:32.309400   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.309417   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:32.309495   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:32.331019   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.331044   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:32.331139   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:32.352279   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.352294   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:32.352397   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:32.374275   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.374291   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:32.374365   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:32.397243   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.397257   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:32.397265   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:32.397274   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:32.414144   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:32.414165   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:32.475086   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:32.475098   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:32.475105   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:32.501379   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:32.501395   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:34.546736   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045354545s)
	I0222 21:24:34.546849   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:34.546856   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:37.087948   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:37.202247   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:37.224572   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.224586   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:37.224654   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:37.244635   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.244650   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:37.244718   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:37.264863   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.264875   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:37.264947   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:37.284270   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.284283   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:37.284354   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:37.303169   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.303182   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:37.303251   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:37.323145   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.323159   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:37.323226   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:37.342371   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.342385   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:37.342465   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:37.362424   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.362437   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:37.362506   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:37.381960   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.381975   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:37.381984   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:37.381991   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:37.422449   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:37.422465   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:37.435063   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:37.435077   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:37.491783   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:37.491795   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:37.491803   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:37.514520   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:37.514534   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:39.560307   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045788831s)
	I0222 21:24:42.060627   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:42.200890   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:42.223080   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.223093   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:42.223161   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:42.241622   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.241635   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:42.241703   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:42.261092   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.261105   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:42.261185   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:42.279927   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.279940   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:42.280010   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:42.298663   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.298678   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:42.298748   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:42.318041   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.318054   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:42.318126   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:42.337580   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.337608   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:42.337726   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:42.356851   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.356864   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:42.356934   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:42.376305   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.376322   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:42.376332   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:42.376342   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:42.398553   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:42.398567   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:44.444548   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045995267s)
	I0222 21:24:44.444672   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:44.444680   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:44.490157   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:44.490178   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:44.503158   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:44.503173   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:44.574050   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:47.074311   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:47.200116   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:47.221119   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.221134   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:47.221207   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:47.240926   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.240941   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:47.241019   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:47.262424   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.262441   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:47.262524   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:47.284987   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.285002   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:47.285075   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:47.305958   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.305989   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:47.306065   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:47.328315   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.328329   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:47.328407   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:47.351450   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.351466   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:47.351542   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:47.382177   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.382192   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:47.382272   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:47.402771   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.402785   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:47.402793   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:47.402801   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:47.446191   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:47.446213   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:47.459385   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:47.459405   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:47.521896   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:47.521911   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:47.521921   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:47.545714   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:47.545732   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:49.593540   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047823947s)
	I0222 21:24:52.094763   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:52.200456   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:52.220544   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.220558   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:52.220629   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:52.241485   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.241498   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:52.241568   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:52.260782   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.260796   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:52.260865   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:52.283002   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.283016   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:52.283087   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:52.302244   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.302258   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:52.302331   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:52.322356   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.322370   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:52.322440   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:52.342684   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.342697   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:52.342766   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:52.363060   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.363074   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:52.363147   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:52.382587   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.382600   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:52.382608   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:52.382617   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:52.425350   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:52.425366   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:52.439375   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:52.439392   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:52.495168   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:52.495187   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:52.495202   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:52.517449   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:52.517464   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:54.563573   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046125189s)
	I0222 21:24:57.064954   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:57.200279   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:57.221403   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.221416   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:57.221485   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:57.241321   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.241335   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:57.241403   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:57.261199   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.261212   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:57.261282   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:57.280957   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.280970   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:57.281037   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:57.301684   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.301699   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:57.301769   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:57.321953   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.321967   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:57.322039   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:57.341179   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.341192   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:57.341262   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:57.359961   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.359975   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:57.360043   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:57.380216   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.380230   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:57.380239   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:57.380246   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:57.403075   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:57.403092   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:59.449866   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04679023s)
	I0222 21:24:59.449977   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:59.449984   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:59.489595   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:59.489611   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:59.501668   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:59.501682   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:59.556882   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:02.057838   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:02.201307   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:02.222714   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.222729   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:02.222804   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:02.243398   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.243412   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:02.243483   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:02.263184   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.263197   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:02.263262   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:02.281793   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.281807   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:02.281875   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:02.302512   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.302528   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:02.302608   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:02.322821   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.322837   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:02.322915   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:02.344399   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.344417   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:02.344495   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:02.368736   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.368750   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:02.368830   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:02.390122   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.390140   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:02.390150   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:02.390164   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:02.403115   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:02.403134   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:02.461237   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:02.461249   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:02.461256   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:02.483492   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:02.483507   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:04.529393   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045900904s)
	I0222 21:25:04.529501   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:04.529508   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:07.070437   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:07.200638   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:07.221463   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.221478   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:07.221554   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:07.241064   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.241078   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:07.241148   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:07.260839   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.260853   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:07.260933   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:07.280771   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.280785   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:07.280856   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:07.301947   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.301961   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:07.302032   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:07.321773   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.321787   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:07.321858   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:07.341792   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.341805   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:07.341875   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:07.360670   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.360683   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:07.360751   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:07.381436   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.381450   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:07.381457   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:07.381465   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:07.422194   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:07.422210   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:07.436117   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:07.436132   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:07.493678   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:07.493690   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:07.493698   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:07.516685   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:07.516700   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:09.565938   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049253068s)
	I0222 21:25:12.066180   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:12.199655   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:12.219039   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.219052   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:12.219122   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:12.239520   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.239534   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:12.239604   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:12.259176   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.259192   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:12.259261   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:12.278594   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.278607   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:12.278679   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:12.298855   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.298868   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:12.298935   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:12.319489   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.319502   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:12.319570   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:12.339947   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.339964   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:12.340044   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:12.358810   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.358825   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:12.358895   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:12.378133   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.378147   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:12.378155   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:12.378162   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:12.418990   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:12.419006   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:12.433288   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:12.433304   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:12.491097   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:12.491110   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:12.491132   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:12.514381   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:12.514395   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:14.561476   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047096367s)
	I0222 21:25:17.061774   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:17.200110   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:17.219377   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.219390   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:17.219464   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:17.238644   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.238659   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:17.238730   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:17.257690   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.257703   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:17.257775   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:17.277668   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.277683   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:17.277754   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:17.298237   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.298251   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:17.298324   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:17.320587   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.320601   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:17.320676   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:17.342983   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.343021   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:17.343103   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:17.363594   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.363608   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:17.363679   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:17.387353   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.387367   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:17.387376   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:17.387384   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:17.399594   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:17.399610   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:17.458669   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:17.458690   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:17.458697   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:17.480809   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:17.480824   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:19.527929   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047121064s)
	I0222 21:25:19.528035   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:19.528042   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:22.068079   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:22.199742   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:22.220599   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.220614   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:22.220691   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:22.239903   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.239917   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:22.239988   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:22.259512   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.259526   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:22.259599   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:22.278962   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.278977   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:22.279046   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:22.297741   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.297756   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:22.297828   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:22.317416   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.317431   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:22.317503   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:22.337010   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.337023   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:22.337093   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:22.357985   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.358000   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:22.358071   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:22.378102   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.378117   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:22.378124   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:22.378135   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:22.390492   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:22.390506   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:22.447564   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:22.447577   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:22.447585   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:22.469681   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:22.469695   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:24.514401   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044720508s)
	I0222 21:25:24.514545   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:24.514553   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:27.052404   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:27.201547   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:27.223305   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.223319   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:27.223397   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:27.242507   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.242521   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:27.242592   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:27.262404   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.262418   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:27.262489   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:27.281495   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.281509   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:27.281577   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:27.301145   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.301160   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:27.301228   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:27.320256   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.320270   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:27.320340   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:27.338699   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.338712   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:27.338783   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:27.359231   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.359245   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:27.359314   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:27.378692   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.378711   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:27.378722   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:27.378731   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:27.400611   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:27.400624   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:29.448836   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04822698s)
	I0222 21:25:29.448943   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:29.448951   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:29.488424   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:29.488437   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:29.501346   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:29.501359   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:29.556292   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:32.058052   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:32.199935   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:32.220875   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.220891   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:32.220965   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:32.241462   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.241477   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:32.241553   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:32.261226   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.261241   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:32.261318   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:32.280663   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.280679   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:32.280761   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:32.300926   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.300940   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:32.301011   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:32.321828   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.321843   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:32.321915   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:32.343584   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.343599   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:32.343669   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:32.364054   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.364068   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:32.364138   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:32.387117   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.387131   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:32.387141   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:32.387148   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:32.429300   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:32.429315   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:32.442548   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:32.442563   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:32.498429   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:32.498455   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:32.498462   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:32.520869   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:32.520883   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:34.565803   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044931263s)
	I0222 21:25:37.068107   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:37.200026   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:37.221998   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.222012   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:37.222081   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:37.241293   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.241307   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:37.241376   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:37.261243   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.261256   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:37.261322   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:37.280484   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.280498   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:37.280568   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:37.300606   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.300627   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:37.300695   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:37.319554   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.319568   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:37.319642   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:37.339882   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.339895   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:37.339962   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:37.359288   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.359302   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:37.359374   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:37.378340   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.378354   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:37.378361   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:37.378368   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:37.418393   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:37.418406   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:37.431958   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:37.431972   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:37.491212   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:37.491225   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:37.491234   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:37.512437   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:37.512452   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:39.558884   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046447278s)
	I0222 21:25:42.059173   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:42.199488   21529 kubeadm.go:637] restartCluster took 4m10.916925108s
	W0222 21:25:42.199575   21529 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0222 21:25:42.199594   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0222 21:25:42.610200   21529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:25:42.620222   21529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:25:42.628036   21529 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:25:42.628090   21529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:25:42.635544   21529 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:25:42.635579   21529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:25:42.687097   21529 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:25:42.687139   21529 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:25:42.854830   21529 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:25:42.854918   21529 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:25:42.855007   21529 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:25:43.012719   21529 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:25:43.013494   21529 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:25:43.020034   21529 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:25:43.089352   21529 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:25:43.110912   21529 out.go:204]   - Generating certificates and keys ...
	I0222 21:25:43.111006   21529 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:25:43.111065   21529 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:25:43.111184   21529 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 21:25:43.111247   21529 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 21:25:43.111378   21529 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 21:25:43.111457   21529 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 21:25:43.111515   21529 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 21:25:43.111592   21529 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 21:25:43.111671   21529 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 21:25:43.111738   21529 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 21:25:43.111774   21529 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 21:25:43.111821   21529 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:25:43.383566   21529 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:25:43.436537   21529 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:25:43.724236   21529 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:25:43.891703   21529 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:25:43.892290   21529 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:25:43.935494   21529 out.go:204]   - Booting up control plane ...
	I0222 21:25:43.935636   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:25:43.935745   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:25:43.935832   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:25:43.935947   21529 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:25:43.936125   21529 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:26:23.900942   21529 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:26:23.901760   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:23.902036   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:26:28.903377   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:28.903652   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:26:38.904947   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:38.905177   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:26:58.906092   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:58.906316   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:27:38.907893   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:27:38.908166   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:27:38.908188   21529 kubeadm.go:322] 
	I0222 21:27:38.908251   21529 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:27:38.908301   21529 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:27:38.908307   21529 kubeadm.go:322] 
	I0222 21:27:38.908344   21529 kubeadm.go:322] This error is likely caused by:
	I0222 21:27:38.908400   21529 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:27:38.908519   21529 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:27:38.908534   21529 kubeadm.go:322] 
	I0222 21:27:38.908659   21529 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:27:38.908697   21529 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:27:38.908740   21529 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:27:38.908753   21529 kubeadm.go:322] 
	I0222 21:27:38.908862   21529 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:27:38.908968   21529 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:27:38.909074   21529 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:27:38.909151   21529 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:27:38.909247   21529 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:27:38.909287   21529 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:27:38.911820   21529 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:27:38.911895   21529 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:27:38.912000   21529 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:27:38.912098   21529 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:27:38.912172   21529 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:27:38.912236   21529 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0222 21:27:38.912357   21529 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0222 21:27:38.912384   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0222 21:27:39.327021   21529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:27:39.337465   21529 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:27:39.337526   21529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:27:39.345156   21529 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:27:39.345175   21529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:27:39.392174   21529 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:27:39.392221   21529 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:27:39.558482   21529 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:27:39.558560   21529 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:27:39.558673   21529 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:27:39.717320   21529 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:27:39.718032   21529 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:27:39.724876   21529 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:27:39.796182   21529 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:27:39.817754   21529 out.go:204]   - Generating certificates and keys ...
	I0222 21:27:39.817896   21529 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:27:39.817972   21529 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:27:39.818058   21529 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 21:27:39.818127   21529 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 21:27:39.818235   21529 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 21:27:39.818334   21529 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 21:27:39.818421   21529 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 21:27:39.818474   21529 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 21:27:39.818551   21529 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 21:27:39.818632   21529 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 21:27:39.818666   21529 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 21:27:39.818716   21529 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:27:39.884743   21529 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:27:39.946621   21529 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:27:40.262279   21529 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:27:40.327151   21529 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:27:40.328024   21529 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:27:40.349755   21529 out.go:204]   - Booting up control plane ...
	I0222 21:27:40.349934   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:27:40.350156   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:27:40.350285   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:27:40.350439   21529 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:27:40.350825   21529 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:28:20.339172   21529 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:28:20.340259   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:20.340453   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:28:25.340823   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:25.340978   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:28:35.342169   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:35.342414   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:28:55.342655   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:55.342827   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:29:35.344209   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:29:35.344445   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:29:35.344456   21529 kubeadm.go:322] 
	I0222 21:29:35.344506   21529 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:29:35.344552   21529 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:29:35.344560   21529 kubeadm.go:322] 
	I0222 21:29:35.344600   21529 kubeadm.go:322] This error is likely caused by:
	I0222 21:29:35.344635   21529 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:29:35.344768   21529 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:29:35.344783   21529 kubeadm.go:322] 
	I0222 21:29:35.344897   21529 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:29:35.344941   21529 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:29:35.344982   21529 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:29:35.344988   21529 kubeadm.go:322] 
	I0222 21:29:35.345111   21529 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:29:35.345221   21529 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:29:35.345321   21529 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:29:35.345379   21529 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:29:35.345468   21529 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:29:35.345511   21529 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:29:35.347495   21529 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:29:35.347580   21529 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:29:35.347697   21529 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:29:35.347781   21529 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:29:35.347866   21529 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:29:35.347921   21529 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0222 21:29:35.347945   21529 kubeadm.go:403] StartCluster complete in 8m4.096882529s
	I0222 21:29:35.348037   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:29:35.368504   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.368518   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:29:35.368608   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:29:35.390318   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.390332   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:29:35.390403   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:29:35.410592   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.410608   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:29:35.410678   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:29:35.429681   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.429696   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:29:35.429766   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:29:35.451049   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.451063   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:29:35.451140   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:29:35.475092   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.475107   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:29:35.475191   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:29:35.496009   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.496024   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:29:35.496097   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:29:35.515551   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.515569   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:29:35.515644   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:29:35.537284   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.537299   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:29:35.537307   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:29:35.537315   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:29:35.582497   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:29:35.582517   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:29:35.599453   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:29:35.599471   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:29:35.668710   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:29:35.668723   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:29:35.668732   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:29:35.695984   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:29:35.696004   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:29:37.746902   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05091002s)
	W0222 21:29:37.747059   21529 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0222 21:29:37.747081   21529 out.go:239] * 
	* 
	W0222 21:29:37.747207   21529 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:29:37.747229   21529 out.go:239] * 
	* 
	W0222 21:29:37.747988   21529 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0222 21:29:37.831476   21529 out.go:177] 
	W0222 21:29:37.873442   21529 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:29:37.873536   21529 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0222 21:29:37.873581   21529 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0222 21:29:37.894540   21529 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-865000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-865000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-865000:

-- stdout --
	[
	    {
	        "Id": "106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c",
	        "Created": "2023-02-23T05:15:31.417090555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:21:27.411292149Z",
	            "FinishedAt": "2023-02-23T05:21:24.519545355Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hosts",
	        "LogPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c-json.log",
	        "Name": "/old-k8s-version-865000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-865000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-865000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-865000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-865000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-865000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "351da03ebb5828b9ae09ef98a1a92ca983c146b1286e410710fdcd0e8b997b44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54722"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54723"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54724"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54725"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54726"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/351da03ebb58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-865000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106a8b195383",
	                        "old-k8s-version-865000"
	                    ],
	                    "NetworkID": "947893b68cb410e9e5982aa5b8afeae1844c1ff30155168ea70efca5bffdb638",
	                    "EndpointID": "514d220541551db5b6e5df3d10fa1937f8cfad31f95838367761a5c304074af5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (449.416016ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-865000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-865000 logs -n 25: (3.859004834s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-310000 sudo                            | kubenet-310000         | jenkins | v1.29.0 | 22 Feb 23 21:16 PST | 22 Feb 23 21:16 PST |
	|         | containerd config dump                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-310000 sudo                            | kubenet-310000         | jenkins | v1.29.0 | 22 Feb 23 21:16 PST |                     |
	|         | systemctl status crio --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-310000 sudo                            | kubenet-310000         | jenkins | v1.29.0 | 22 Feb 23 21:16 PST | 22 Feb 23 21:16 PST |
	|         | systemctl cat crio --no-pager                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-310000 sudo find                       | kubenet-310000         | jenkins | v1.29.0 | 22 Feb 23 21:16 PST | 22 Feb 23 21:16 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-310000 sudo crio                       | kubenet-310000         | jenkins | v1.29.0 | 22 Feb 23 21:16 PST | 22 Feb 23 21:16 PST |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p kubenet-310000                                 | kubenet-310000         | jenkins | v1.29.0 | 22 Feb 23 21:16 PST | 22 Feb 23 21:16 PST |
	| start   | -p no-preload-081000                              | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:16 PST | 22 Feb 23 21:17 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-081000        | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:17 PST | 22 Feb 23 21:17 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-081000                              | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:17 PST | 22 Feb 23 21:17 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-081000             | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:17 PST | 22 Feb 23 21:17 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-081000                              | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:17 PST | 22 Feb 23 21:22 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-865000   | old-k8s-version-865000 | jenkins | v1.29.0 | 22 Feb 23 21:19 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-865000                         | old-k8s-version-865000 | jenkins | v1.29.0 | 22 Feb 23 21:21 PST | 22 Feb 23 21:21 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-865000        | old-k8s-version-865000 | jenkins | v1.29.0 | 22 Feb 23 21:21 PST | 22 Feb 23 21:21 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-865000                         | old-k8s-version-865000 | jenkins | v1.29.0 | 22 Feb 23 21:21 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-081000 sudo                         | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:22 PST | 22 Feb 23 21:22 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-081000                              | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:22 PST | 22 Feb 23 21:22 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-081000                              | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:23 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-081000                              | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:23 PST |
	| delete  | -p no-preload-081000                              | no-preload-081000      | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:23 PST |
	| start   | -p embed-certs-677000                             | embed-certs-677000     | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:24 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-677000       | embed-certs-677000     | jenkins | v1.29.0 | 22 Feb 23 21:24 PST | 22 Feb 23 21:24 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-677000                             | embed-certs-677000     | jenkins | v1.29.0 | 22 Feb 23 21:24 PST | 22 Feb 23 21:24 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-677000            | embed-certs-677000     | jenkins | v1.29.0 | 22 Feb 23 21:24 PST | 22 Feb 23 21:24 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-677000                             | embed-certs-677000     | jenkins | v1.29.0 | 22 Feb 23 21:24 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 21:24:27
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 21:24:27.892617   22044 out.go:296] Setting OutFile to fd 1 ...
	I0222 21:24:27.892795   22044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:24:27.892800   22044 out.go:309] Setting ErrFile to fd 2...
	I0222 21:24:27.892804   22044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:24:27.892924   22044 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 21:24:27.894346   22044 out.go:303] Setting JSON to false
	I0222 21:24:27.913273   22044 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5042,"bootTime":1677124825,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 21:24:27.913402   22044 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 21:24:27.935868   22044 out.go:177] * [embed-certs-677000] minikube v1.29.0 on Darwin 13.2
	I0222 21:24:27.978319   22044 notify.go:220] Checking for updates...
	I0222 21:24:27.999887   22044 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 21:24:28.021101   22044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:24:28.043900   22044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 21:24:28.065156   22044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 21:24:28.086028   22044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 21:24:28.108833   22044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 21:24:28.130833   22044 config.go:182] Loaded profile config "embed-certs-677000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:24:28.131490   22044 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 21:24:28.193292   22044 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 21:24:28.193412   22044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:24:28.335856   22044 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:24:28.243839412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:24:28.357746   22044 out.go:177] * Using the docker driver based on existing profile
	I0222 21:24:28.379413   22044 start.go:296] selected driver: docker
	I0222 21:24:28.379443   22044 start.go:857] validating driver "docker" against &{Name:embed-certs-677000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-677000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:24:28.379593   22044 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 21:24:28.383324   22044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:24:28.528651   22044 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:24:28.433788014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:24:28.528839   22044 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0222 21:24:28.528863   22044 cni.go:84] Creating CNI manager for ""
	I0222 21:24:28.528880   22044 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:24:28.528890   22044 start_flags.go:319] config:
	{Name:embed-certs-677000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-677000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:24:28.549754   22044 out.go:177] * Starting control plane node embed-certs-677000 in cluster embed-certs-677000
	I0222 21:24:28.570711   22044 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 21:24:28.591493   22044 out.go:177] * Pulling base image ...
	I0222 21:24:28.633707   22044 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:24:28.633760   22044 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 21:24:28.633771   22044 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 21:24:28.633782   22044 cache.go:57] Caching tarball of preloaded images
	I0222 21:24:28.633898   22044 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 21:24:28.633908   22044 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 21:24:28.634345   22044 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/config.json ...
	I0222 21:24:28.690009   22044 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 21:24:28.690029   22044 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 21:24:28.690049   22044 cache.go:193] Successfully downloaded all kic artifacts
	I0222 21:24:28.690099   22044 start.go:364] acquiring machines lock for embed-certs-677000: {Name:mke7836da6c74d78d0e7dec838cc98ac2bf403a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 21:24:28.690182   22044 start.go:368] acquired machines lock for "embed-certs-677000" in 63.087µs
	I0222 21:24:28.690218   22044 start.go:96] Skipping create...Using existing machine configuration
	I0222 21:24:28.690227   22044 fix.go:55] fixHost starting: 
	I0222 21:24:28.690463   22044 cli_runner.go:164] Run: docker container inspect embed-certs-677000 --format={{.State.Status}}
	I0222 21:24:28.748852   22044 fix.go:103] recreateIfNeeded on embed-certs-677000: state=Stopped err=<nil>
	W0222 21:24:28.748878   22044 fix.go:129] unexpected machine state, will restart: <nil>
	I0222 21:24:28.792576   22044 out.go:177] * Restarting existing docker container for "embed-certs-677000" ...
	I0222 21:24:27.060593   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:27.200227   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:27.222130   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.222146   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:27.222223   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:27.243760   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.243773   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:27.243856   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:27.265107   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.265122   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:27.265207   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:27.287276   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.287288   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:27.287361   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:27.309414   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.309429   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:27.309522   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:27.331882   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.331895   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:27.331964   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:27.353906   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.353920   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:27.353995   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:27.374688   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.374701   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:27.374769   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:27.394854   21529 logs.go:278] 0 containers: []
	W0222 21:24:27.394870   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:27.394878   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:27.394886   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:27.434821   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:27.434838   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:27.447530   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:27.447543   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:27.506872   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:27.506887   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:27.506893   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:27.530367   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:27.530385   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:29.577829   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047458812s)
	I0222 21:24:28.814743   22044 cli_runner.go:164] Run: docker start embed-certs-677000
	I0222 21:24:29.137985   22044 cli_runner.go:164] Run: docker container inspect embed-certs-677000 --format={{.State.Status}}
	I0222 21:24:29.200624   22044 kic.go:426] container "embed-certs-677000" state is running.
	I0222 21:24:29.201189   22044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-677000
	I0222 21:24:29.267581   22044 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/config.json ...
	I0222 21:24:29.268034   22044 machine.go:88] provisioning docker machine ...
	I0222 21:24:29.268082   22044 ubuntu.go:169] provisioning hostname "embed-certs-677000"
	I0222 21:24:29.268175   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:29.337909   22044 main.go:141] libmachine: Using SSH client type: native
	I0222 21:24:29.338377   22044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54877 <nil> <nil>}
	I0222 21:24:29.338407   22044 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-677000 && echo "embed-certs-677000" | sudo tee /etc/hostname
	I0222 21:24:29.483799   22044 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-677000
	
	I0222 21:24:29.483885   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:29.544368   22044 main.go:141] libmachine: Using SSH client type: native
	I0222 21:24:29.544720   22044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54877 <nil> <nil>}
	I0222 21:24:29.544733   22044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-677000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-677000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-677000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 21:24:29.680328   22044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:24:29.680352   22044 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 21:24:29.680372   22044 ubuntu.go:177] setting up certificates
	I0222 21:24:29.680379   22044 provision.go:83] configureAuth start
	I0222 21:24:29.680457   22044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-677000
	I0222 21:24:29.738972   22044 provision.go:138] copyHostCerts
	I0222 21:24:29.739065   22044 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 21:24:29.739077   22044 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 21:24:29.739167   22044 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 21:24:29.739380   22044 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 21:24:29.739388   22044 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 21:24:29.739446   22044 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 21:24:29.739603   22044 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 21:24:29.739609   22044 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 21:24:29.739669   22044 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 21:24:29.739797   22044 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.embed-certs-677000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-677000]
	I0222 21:24:29.836970   22044 provision.go:172] copyRemoteCerts
	I0222 21:24:29.837028   22044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 21:24:29.837097   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:29.896176   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:24:29.991992   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 21:24:30.009203   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0222 21:24:30.026805   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 21:24:30.044845   22044 provision.go:86] duration metric: configureAuth took 364.456578ms
	I0222 21:24:30.044859   22044 ubuntu.go:193] setting minikube options for container-runtime
	I0222 21:24:30.045019   22044 config.go:182] Loaded profile config "embed-certs-677000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:24:30.045091   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:30.108562   22044 main.go:141] libmachine: Using SSH client type: native
	I0222 21:24:30.108928   22044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54877 <nil> <nil>}
	I0222 21:24:30.108937   22044 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 21:24:30.245278   22044 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 21:24:30.245294   22044 ubuntu.go:71] root file system type: overlay
	I0222 21:24:30.245398   22044 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 21:24:30.245479   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:30.304910   22044 main.go:141] libmachine: Using SSH client type: native
	I0222 21:24:30.305308   22044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54877 <nil> <nil>}
	I0222 21:24:30.305357   22044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 21:24:30.449206   22044 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 21:24:30.449313   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:30.508067   22044 main.go:141] libmachine: Using SSH client type: native
	I0222 21:24:30.508418   22044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 54877 <nil> <nil>}
	I0222 21:24:30.508431   22044 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 21:24:30.649565   22044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:24:30.649585   22044 machine.go:91] provisioned docker machine in 1.381561128s
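The one-line SSH command a few lines above updates the docker systemd unit with a compare-then-swap: the freshly rendered unit is written to docker.service.new, and the installed unit is only replaced (and dockerd restarted) when the two actually differ. A minimal sketch of that pattern, using the same paths as above and assuming a systemd host as inside the kic container:

    # Only swap in the new unit and restart dockerd when the rendered unit differs.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload \
        && sudo systemctl -f enable docker \
        && sudo systemctl -f restart docker
    fi

The empty command output logged above suggests the diff was clean in this run, so the unit was left in place and no restart happened at this step.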
	I0222 21:24:30.649596   22044 start.go:300] post-start starting for "embed-certs-677000" (driver="docker")
	I0222 21:24:30.649601   22044 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 21:24:30.649692   22044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 21:24:30.649751   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:30.708285   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:24:30.803614   22044 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 21:24:30.808296   22044 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 21:24:30.808314   22044 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 21:24:30.808328   22044 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 21:24:30.808333   22044 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 21:24:30.808341   22044 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 21:24:30.808435   22044 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 21:24:30.808596   22044 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 21:24:30.808779   22044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 21:24:30.816677   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:24:30.835553   22044 start.go:303] post-start completed in 185.950906ms
	I0222 21:24:30.835635   22044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 21:24:30.835691   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:30.897420   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:24:30.992619   22044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 21:24:30.997492   22044 fix.go:57] fixHost completed within 2.307292634s
	I0222 21:24:30.997511   22044 start.go:83] releasing machines lock for "embed-certs-677000", held for 2.307351088s
	I0222 21:24:30.997602   22044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-677000
	I0222 21:24:31.056631   22044 ssh_runner.go:195] Run: cat /version.json
	I0222 21:24:31.056662   22044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 21:24:31.056699   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:31.056741   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:31.141387   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:24:31.141519   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:24:31.287568   22044 ssh_runner.go:195] Run: systemctl --version
	I0222 21:24:31.292517   22044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 21:24:31.297720   22044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 21:24:31.313589   22044 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
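The find/sed pipeline above normalizes any loopback CNI config it finds: it inserts a "name" field when one is missing and pins the declared cniVersion to 1.0.0, presumably because the newer CNI validation used by this stack expects named configs. A rough illustration on a scratch copy (the original file contents below are an assumed typical minimal loopback conf, not taken from this run; GNU sed, as inside the Linux container):

    printf '{\n  "cniVersion": "0.3.1",\n  "type": "loopback"\n}\n' > /tmp/loopback.conf
    # add a "name" field if the config does not already have one ...
    grep -q '"name"' /tmp/loopback.conf || \
      sed -i 's|^\( *\)"type": "loopback"|\1"name": "loopback",\n\1"type": "loopback"|' /tmp/loopback.conf
    # ... and pin the declared cniVersion
    sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /tmp/loopback.conf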
	I0222 21:24:31.313689   22044 ssh_runner.go:195] Run: which cri-dockerd
	I0222 21:24:31.317839   22044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 21:24:31.325528   22044 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 21:24:31.339068   22044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0222 21:24:31.347099   22044 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0222 21:24:31.347116   22044 start.go:485] detecting cgroup driver to use...
	I0222 21:24:31.347129   22044 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:24:31.347240   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:24:31.360674   22044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 21:24:31.369775   22044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 21:24:31.378760   22044 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 21:24:31.378819   22044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 21:24:31.387715   22044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:24:31.395989   22044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 21:24:31.404367   22044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:24:31.413413   22044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 21:24:31.421509   22044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 21:24:31.430260   22044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 21:24:31.437573   22044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 21:24:31.444907   22044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:24:31.515909   22044 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 21:24:31.600330   22044 start.go:485] detecting cgroup driver to use...
	I0222 21:24:31.600357   22044 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:24:31.600422   22044 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 21:24:31.612663   22044 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 21:24:31.612742   22044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 21:24:31.624996   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:24:31.639869   22044 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 21:24:31.751062   22044 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 21:24:31.851606   22044 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 21:24:31.851624   22044 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 21:24:31.865958   22044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:24:31.960142   22044 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 21:24:32.236136   22044 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:24:32.304341   22044 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 21:24:32.380428   22044 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:24:32.456733   22044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:24:32.531339   22044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 21:24:32.543778   22044 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 21:24:32.543860   22044 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 21:24:32.548424   22044 start.go:553] Will wait 60s for crictl version
	I0222 21:24:32.548490   22044 ssh_runner.go:195] Run: which crictl
	I0222 21:24:32.552395   22044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 21:24:32.665254   22044 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 21:24:32.665335   22044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:24:32.691527   22044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:24:32.740931   22044 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 21:24:32.741125   22044 cli_runner.go:164] Run: docker exec -t embed-certs-677000 dig +short host.docker.internal
	I0222 21:24:32.859085   22044 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:24:32.859213   22044 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:24:32.863941   22044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
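The /etc/hosts rewrite above is the usual replace-or-append idiom: any existing host.minikube.internal line is filtered out, the current mapping is appended, and the result is copied back over /etc/hosts. Copying with cp rather than renaming looks deliberate, since /etc/hosts is typically bind-mounted into the container and can only be updated in place (an assumption, not stated in the log). The same pattern, unrolled:

    IP=192.168.65.2                        # host ip resolved via dig above
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$IP"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts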
	I0222 21:24:32.873944   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:32.078067   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:32.200204   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:32.220966   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.220982   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:32.221056   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:32.243113   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.243127   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:32.243196   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:32.262917   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.262934   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:32.263020   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:32.287162   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.287177   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:32.287249   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:32.309400   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.309417   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:32.309495   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:32.331019   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.331044   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:32.331139   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:32.352279   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.352294   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:32.352397   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:32.374275   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.374291   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:32.374365   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:32.397243   21529 logs.go:278] 0 containers: []
	W0222 21:24:32.397257   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:32.397265   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:32.397274   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:32.414144   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:32.414165   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:32.475086   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:32.475098   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:32.475105   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:32.501379   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:32.501395   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:34.546736   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045354545s)
	I0222 21:24:34.546849   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:34.546856   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:32.933393   22044 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:24:32.933465   22044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:24:32.954494   22044 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0222 21:24:32.954512   22044 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:24:32.954599   22044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:24:32.975256   22044 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0222 21:24:32.975273   22044 cache_images.go:84] Images are preloaded, skipping loading
	I0222 21:24:32.975372   22044 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:24:33.000385   22044 cni.go:84] Creating CNI manager for ""
	I0222 21:24:33.000411   22044 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:24:33.000438   22044 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 21:24:33.000455   22044 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-677000 NodeName:embed-certs-677000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:24:33.000569   22044 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-677000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 21:24:33.000642   22044 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-677000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-677000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0222 21:24:33.000711   22044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 21:24:33.009130   22044 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:24:33.009184   22044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:24:33.016666   22044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0222 21:24:33.030257   22044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:24:33.045303   22044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0222 21:24:33.060230   22044 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:24:33.064820   22044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:24:33.076411   22044 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000 for IP: 192.168.67.2
	I0222 21:24:33.076435   22044 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:24:33.076610   22044 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:24:33.076664   22044 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:24:33.076782   22044 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/client.key
	I0222 21:24:33.076858   22044 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/apiserver.key.c7fa3a9e
	I0222 21:24:33.076937   22044 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/proxy-client.key
	I0222 21:24:33.077168   22044 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:24:33.077211   22044 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:24:33.077222   22044 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:24:33.077256   22044 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:24:33.077292   22044 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:24:33.077322   22044 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:24:33.077392   22044 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:24:33.077973   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:24:33.097050   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0222 21:24:33.115892   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:24:33.134331   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/embed-certs-677000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0222 21:24:33.153303   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:24:33.171718   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:24:33.189938   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:24:33.207988   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:24:33.226296   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:24:33.244486   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:24:33.262520   22044 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:24:33.280287   22044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:24:33.293355   22044 ssh_runner.go:195] Run: openssl version
	I0222 21:24:33.299293   22044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:24:33.307828   22044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:24:33.312131   22044 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:24:33.312178   22044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:24:33.317884   22044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:24:33.325695   22044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:24:33.334138   22044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:24:33.338346   22044 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:24:33.338393   22044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:24:33.343962   22044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 21:24:33.351932   22044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:24:33.360280   22044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:24:33.364592   22044 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:24:33.364639   22044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:24:33.370027   22044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
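The commands just above show how minikube makes its copied CA certificates trusted inside the node: each PEM placed under /usr/share/ca-certificates is hashed with openssl x509 -hash and then symlinked into /etc/ssl/certs as <hash>.0, the layout OpenSSL uses for CA lookup. A minimal local sketch of that step (the real code runs these commands on the node through minikube's ssh_runner; running them locally here is an assumption for illustration):

    // Sketch: how the <hash>.0 symlinks under /etc/ssl/certs are derived.
    // Assumes openssl is on PATH and local filesystem access.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of certPath and links
    // it into certsDir as "<hash>.0", mirroring the ln -fs calls in the log.
    func linkCertByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	// Remove any stale link first so repeated runs stay idempotent.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }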
	I0222 21:24:33.377934   22044 kubeadm.go:401] StartCluster: {Name:embed-certs-677000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-677000 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:24:33.378123   22044 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:24:33.397684   22044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:24:33.405777   22044 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0222 21:24:33.405796   22044 kubeadm.go:633] restartCluster start
	I0222 21:24:33.405848   22044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0222 21:24:33.412950   22044 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:33.413028   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:24:33.472547   22044 kubeconfig.go:135] verify returned: extract IP: "embed-certs-677000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:24:33.472715   22044 kubeconfig.go:146] "embed-certs-677000" context is missing from /Users/jenkins/minikube-integration/15909-2664/kubeconfig - will repair!
	I0222 21:24:33.473095   22044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:24:33.474690   22044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0222 21:24:33.482788   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:33.482851   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:33.491807   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:33.991958   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:33.992198   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:34.003287   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:34.492785   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:34.492907   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:34.503962   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:34.993994   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:34.994180   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:35.005483   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:35.493907   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:35.494093   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:35.504982   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:35.993228   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:35.993389   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:36.004263   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:36.491984   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:36.492091   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:36.502856   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:36.992605   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:36.992661   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:37.002041   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:37.491829   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:37.491919   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:37.501247   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
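The repeating "Checking apiserver status ..." entries above are a fixed-interval poll: roughly every 500 ms the runner executes sudo pgrep -xnf kube-apiserver.*minikube.* and treats a non-zero exit as "not up yet". A minimal sketch of such a loop, assuming local execution rather than minikube's SSH runner:

    // Sketch of the ~500 ms poll visible above: keep running pgrep until the
    // kube-apiserver process appears or a deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForAPIServerPID(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // newest matching PID
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	pid, err := waitForAPIServerPID(30 * time.Second)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }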
	I0222 21:24:37.087948   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:37.202247   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:37.224572   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.224586   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:37.224654   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:37.244635   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.244650   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:37.244718   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:37.264863   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.264875   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:37.264947   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:37.284270   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.284283   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:37.284354   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:37.303169   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.303182   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:37.303251   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:37.323145   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.323159   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:37.323226   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:37.342371   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.342385   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:37.342465   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:37.362424   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.362437   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:37.362506   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:37.381960   21529 logs.go:278] 0 containers: []
	W0222 21:24:37.381975   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:37.381984   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:37.381991   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:37.422449   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:37.422465   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:37.435063   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:37.435077   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:37.491783   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:37.491795   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:37.491803   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:37.514520   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:37.514534   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:39.560307   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045788831s)
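The "Gathering logs for ..." pass above (process 21529) cycles through a fixed set of diagnostics: the kubelet and docker journals, dmesg, kubectl describe nodes, and crictl/docker container status. A rough sketch of that collection pass, assuming local execution and that a failing collector (such as describe nodes while the apiserver is down) is recorded rather than fatal:

    // Sketch of the diagnostics-gathering loop: each collector is a shell
    // pipeline run through bash, with output kept per name.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gatherDiagnostics() map[string]string {
    	cmds := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"Docker":           "sudo journalctl -u docker -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	results := make(map[string]string)
    	for name, cmd := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			// Keep going: one failed collector should not abort the pass.
    			results[name] = fmt.Sprintf("error: %v\n%s", err, out)
    			continue
    		}
    		results[name] = string(out)
    	}
    	return results
    }

    func main() {
    	for name, out := range gatherDiagnostics() {
    		fmt.Printf("== %s (%d bytes)\n", name, len(out))
    	}
    }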
	I0222 21:24:37.993863   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:37.994033   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:38.005274   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:38.493881   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:38.494034   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:38.505247   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:38.992300   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:38.992503   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:39.002831   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:39.493110   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:39.493248   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:39.504191   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:39.992460   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:39.992630   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:40.003696   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:40.493196   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:40.493349   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:40.504914   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:40.993182   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:40.993349   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:41.004534   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:41.493848   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:41.494059   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:41.504966   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:41.992235   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:41.992384   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:42.003702   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:42.491845   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:42.491996   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:42.502253   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:42.060627   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:42.200890   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:42.223080   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.223093   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:42.223161   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:42.241622   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.241635   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:42.241703   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:42.261092   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.261105   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:42.261185   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:42.279927   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.279940   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:42.280010   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:42.298663   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.298678   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:42.298748   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:42.318041   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.318054   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:42.318126   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:42.337580   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.337608   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:42.337726   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:42.356851   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.356864   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:42.356934   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:42.376305   21529 logs.go:278] 0 containers: []
	W0222 21:24:42.376322   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:42.376332   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:42.376342   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:42.398553   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:42.398567   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:44.444548   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045995267s)
	I0222 21:24:44.444672   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:44.444680   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:44.490157   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:44.490178   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:44.503158   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:44.503173   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:44.574050   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:42.991913   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:42.992064   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:43.003101   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:43.492481   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:43.492624   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:43.503354   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:43.503370   22044 api_server.go:165] Checking apiserver status ...
	I0222 21:24:43.503423   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:24:43.511799   22044 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:43.511811   22044 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0222 21:24:43.511821   22044 kubeadm.go:1120] stopping kube-system containers ...
	I0222 21:24:43.511886   22044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:24:43.534053   22044 docker.go:456] Stopping containers: [3554125bad67 87a618811540 b3f17c14ec53 273470c1c118 be3cf07370c9 257b655706f7 3610537d52e7 6753652f346f 2c345dfff7bf 69e675bb7a91 b4fcaf917a9b 8699dcee456e f85105bf3832 b0bb22b20077 66f51d4f9836 13b9500c54e7]
	I0222 21:24:43.534140   22044 ssh_runner.go:195] Run: docker stop 3554125bad67 87a618811540 b3f17c14ec53 273470c1c118 be3cf07370c9 257b655706f7 3610537d52e7 6753652f346f 2c345dfff7bf 69e675bb7a91 b4fcaf917a9b 8699dcee456e f85105bf3832 b0bb22b20077 66f51d4f9836 13b9500c54e7
	I0222 21:24:43.554338   22044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
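Before reconfiguring, the restart path above lists every container named k8s_*_(kube-system)_, stops them all in a single docker stop invocation, and then stops the kubelet. A small sketch of that sequence, assuming direct local access to the Docker daemon:

    // Sketch of the "stopping kube-system containers" step above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func stopKubeSystemContainers() error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil // nothing to stop
    	}
    	fmt.Println("Stopping containers:", ids)
    	args := append([]string{"stop"}, ids...)
    	if err := exec.Command("docker", args...).Run(); err != nil {
    		return err
    	}
    	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
    }

    func main() {
    	if err := stopKubeSystemContainers(); err != nil {
    		fmt.Println(err)
    	}
    }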
	I0222 21:24:43.566015   22044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:24:43.574431   22044 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 23 05:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 23 05:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 23 05:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 23 05:23 /etc/kubernetes/scheduler.conf
	
	I0222 21:24:43.574505   22044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0222 21:24:43.582980   22044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0222 21:24:43.591409   22044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0222 21:24:43.599887   22044 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:43.599955   22044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0222 21:24:43.607714   22044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0222 21:24:43.615733   22044 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:24:43.615789   22044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
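The grep/rm pairs above implement a simple pruning rule: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is treated as stale and deleted so that the following "kubeadm init phase kubeconfig" can regenerate it. A sketch of that rule, assuming local file access in place of the SSH-based grep:

    // Sketch of the stale-kubeconfig pruning step above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func pruneStaleKubeconfigs(paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil {
    			continue // missing file: nothing to prune
    		}
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, p)
    			_ = os.Remove(p)
    		}
    	}
    }

    func main() {
    	pruneStaleKubeconfigs([]string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }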
	I0222 21:24:43.623425   22044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:24:43.631805   22044 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0222 21:24:43.631859   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:24:43.686863   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:24:44.213899   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:24:44.358572   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:24:44.431180   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:24:44.546655   22044 api_server.go:51] waiting for apiserver process to appear ...
	I0222 21:24:44.546724   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:45.058994   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:45.559081   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:46.060083   22044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:46.072522   22044 api_server.go:71] duration metric: took 1.525889387s to wait for apiserver process to appear ...
	I0222 21:24:46.072538   22044 api_server.go:87] waiting for apiserver healthz status ...
	I0222 21:24:46.072552   22044 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54876/healthz ...
	I0222 21:24:48.134231   22044 api_server.go:278] https://127.0.0.1:54876/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0222 21:24:48.134245   22044 api_server.go:102] status: https://127.0.0.1:54876/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0222 21:24:48.634633   22044 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54876/healthz ...
	I0222 21:24:48.641833   22044 api_server.go:278] https://127.0.0.1:54876/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:24:48.641851   22044 api_server.go:102] status: https://127.0.0.1:54876/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:24:49.134509   22044 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54876/healthz ...
	I0222 21:24:49.139772   22044 api_server.go:278] https://127.0.0.1:54876/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:24:49.139788   22044 api_server.go:102] status: https://127.0.0.1:54876/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:24:49.634734   22044 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54876/healthz ...
	I0222 21:24:49.641076   22044 api_server.go:278] https://127.0.0.1:54876/healthz returned 200:
	ok
	I0222 21:24:49.647926   22044 api_server.go:140] control plane version: v1.26.1
	I0222 21:24:49.647937   22044 api_server.go:130] duration metric: took 3.575441071s to wait for apiserver health ...
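The healthz wait above tolerates intermediate failures: a 403 while the probe is still anonymous, then 500s while post-start hooks such as rbac/bootstrap-roles finish, stopping on the first 200 "ok". A minimal polling sketch; skipping TLS verification here is purely an illustration shortcut, since the real client trusts the minikube CA:

    // Sketch of polling the apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			status := resp.StatusCode
    			resp.Body.Close()
    			if status == http.StatusOK {
    				return nil // body is "ok"
    			}
    			// 403 and 500 both mean "not ready yet": keep polling.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://127.0.0.1:54876/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }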
	I0222 21:24:49.647943   22044 cni.go:84] Creating CNI manager for ""
	I0222 21:24:49.647952   22044 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:24:49.669730   22044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0222 21:24:47.074311   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:47.200116   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:47.221119   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.221134   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:47.221207   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:47.240926   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.240941   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:47.241019   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:47.262424   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.262441   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:47.262524   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:47.284987   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.285002   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:47.285075   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:47.305958   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.305989   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:47.306065   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:47.328315   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.328329   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:47.328407   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:47.351450   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.351466   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:47.351542   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:47.382177   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.382192   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:47.382272   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:47.402771   21529 logs.go:278] 0 containers: []
	W0222 21:24:47.402785   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:47.402793   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:47.402801   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:47.446191   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:47.446213   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:47.459385   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:47.459405   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:47.521896   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:47.521911   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:47.521921   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:47.545714   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:47.545732   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:49.593540   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047823947s)
	I0222 21:24:49.691168   22044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0222 21:24:49.701471   22044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0222 21:24:49.717524   22044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 21:24:49.725998   22044 system_pods.go:59] 8 kube-system pods found
	I0222 21:24:49.726015   22044 system_pods.go:61] "coredns-787d4945fb-689bx" [369a5825-2fda-44dc-b4fc-f2dcac86c884] Running
	I0222 21:24:49.726023   22044 system_pods.go:61] "etcd-embed-certs-677000" [95356ec8-09a9-4461-8437-941afcf8475a] Running
	I0222 21:24:49.726029   22044 system_pods.go:61] "kube-apiserver-embed-certs-677000" [75ef4762-b6b0-4a34-a887-62bf0ed00f1e] Running
	I0222 21:24:49.726040   22044 system_pods.go:61] "kube-controller-manager-embed-certs-677000" [5e720ff6-5303-4af5-b56f-f40e5dfaadef] Running
	I0222 21:24:49.726052   22044 system_pods.go:61] "kube-proxy-ptzqj" [88dd2ede-dd73-41b5-887f-d188d554607b] Running
	I0222 21:24:49.726065   22044 system_pods.go:61] "kube-scheduler-embed-certs-677000" [c7eec4f8-9a83-4453-90c3-da4fef5cba69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0222 21:24:49.726076   22044 system_pods.go:61] "metrics-server-7997d45854-nqsq8" [68603436-c82f-414d-9e97-116155a16a0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0222 21:24:49.726081   22044 system_pods.go:61] "storage-provisioner" [7f474404-d11f-4d6a-849a-a5cc233b4a30] Running
	I0222 21:24:49.726085   22044 system_pods.go:74] duration metric: took 8.549401ms to wait for pod list to return data ...
	I0222 21:24:49.726093   22044 node_conditions.go:102] verifying NodePressure condition ...
	I0222 21:24:49.729429   22044 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 21:24:49.729454   22044 node_conditions.go:123] node cpu capacity is 6
	I0222 21:24:49.729473   22044 node_conditions.go:105] duration metric: took 3.374103ms to run NodePressure ...
	I0222 21:24:49.729493   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:24:50.048515   22044 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0222 21:24:50.054827   22044 kubeadm.go:784] kubelet initialised
	I0222 21:24:50.054861   22044 kubeadm.go:785] duration metric: took 6.311035ms waiting for restarted kubelet to initialise ...
	I0222 21:24:50.054868   22044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 21:24:50.060485   22044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-689bx" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.067444   22044 pod_ready.go:92] pod "coredns-787d4945fb-689bx" in "kube-system" namespace has status "Ready":"True"
	I0222 21:24:50.067457   22044 pod_ready.go:81] duration metric: took 6.956022ms waiting for pod "coredns-787d4945fb-689bx" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.067467   22044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.079072   22044 pod_ready.go:92] pod "etcd-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:24:50.079086   22044 pod_ready.go:81] duration metric: took 11.612525ms waiting for pod "etcd-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.079098   22044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.087530   22044 pod_ready.go:92] pod "kube-apiserver-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:24:50.087556   22044 pod_ready.go:81] duration metric: took 8.44814ms waiting for pod "kube-apiserver-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.087584   22044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.135419   22044 pod_ready.go:92] pod "kube-controller-manager-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:24:50.135429   22044 pod_ready.go:81] duration metric: took 47.841634ms waiting for pod "kube-controller-manager-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.135436   22044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ptzqj" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.532669   22044 pod_ready.go:92] pod "kube-proxy-ptzqj" in "kube-system" namespace has status "Ready":"True"
	I0222 21:24:50.532680   22044 pod_ready.go:81] duration metric: took 397.244276ms waiting for pod "kube-proxy-ptzqj" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:50.532690   22044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
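Each pod_ready.go entry above is a bounded wait for a system pod's Ready condition to become True. A compact client-go sketch of the same check; the kubeconfig path is a placeholder and the 2-second poll interval is an assumption:

    // Sketch of waiting for a pod's Ready condition via client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func waitForPodReady(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForPodReady(cs, "kube-system", "kube-scheduler-embed-certs-677000", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }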
	I0222 21:24:52.094763   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:52.200456   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:52.220544   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.220558   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:52.220629   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:52.241485   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.241498   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:52.241568   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:52.260782   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.260796   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:52.260865   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:52.283002   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.283016   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:52.283087   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:52.302244   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.302258   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:52.302331   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:52.322356   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.322370   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:52.322440   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:52.342684   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.342697   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:52.342766   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:52.363060   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.363074   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:52.363147   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:52.382587   21529 logs.go:278] 0 containers: []
	W0222 21:24:52.382600   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:52.382608   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:52.382617   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:52.425350   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:52.425366   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:52.439375   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:52.439392   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:52.495168   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:52.495187   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:52.495202   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:52.517449   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:52.517464   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:54.563573   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046125189s)
	I0222 21:24:52.928754   22044 pod_ready.go:102] pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace has status "Ready":"False"
	I0222 21:24:54.929736   22044 pod_ready.go:102] pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace has status "Ready":"False"
	I0222 21:24:56.930316   22044 pod_ready.go:102] pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace has status "Ready":"False"
	I0222 21:24:57.064954   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:24:57.200279   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:24:57.221403   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.221416   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:24:57.221485   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:24:57.241321   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.241335   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:24:57.241403   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:24:57.261199   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.261212   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:24:57.261282   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:24:57.280957   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.280970   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:24:57.281037   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:24:57.301684   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.301699   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:24:57.301769   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:24:57.321953   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.321967   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:24:57.322039   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:24:57.341179   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.341192   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:24:57.341262   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:24:57.359961   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.359975   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:24:57.360043   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:24:57.380216   21529 logs.go:278] 0 containers: []
	W0222 21:24:57.380230   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:24:57.380239   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:24:57.380246   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:24:57.403075   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:24:57.403092   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:24:59.449866   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04679023s)
	I0222 21:24:59.449977   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:24:59.449984   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:24:59.489595   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:24:59.489611   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:24:59.501668   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:24:59.501682   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:24:59.556882   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:24:58.931316   22044 pod_ready.go:102] pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace has status "Ready":"False"
	I0222 21:24:59.428463   22044 pod_ready.go:92] pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:24:59.428478   22044 pod_ready.go:81] duration metric: took 8.895900227s waiting for pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:24:59.428487   22044 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace to be "Ready" ...
	I0222 21:25:01.440983   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:02.057838   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:02.201307   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:02.222714   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.222729   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:02.222804   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:02.243398   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.243412   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:02.243483   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:02.263184   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.263197   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:02.263262   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:02.281793   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.281807   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:02.281875   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:02.302512   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.302528   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:02.302608   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:02.322821   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.322837   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:02.322915   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:02.344399   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.344417   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:02.344495   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:02.368736   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.368750   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:02.368830   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:02.390122   21529 logs.go:278] 0 containers: []
	W0222 21:25:02.390140   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:02.390150   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:02.390164   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:02.403115   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:02.403134   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:02.461237   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:02.461249   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:02.461256   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:02.483492   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:02.483507   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:04.529393   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045900904s)
	I0222 21:25:04.529501   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:04.529508   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:03.441496   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:05.941578   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:07.070437   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:07.200638   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:07.221463   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.221478   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:07.221554   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:07.241064   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.241078   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:07.241148   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:07.260839   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.260853   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:07.260933   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:07.280771   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.280785   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:07.280856   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:07.301947   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.301961   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:07.302032   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:07.321773   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.321787   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:07.321858   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:07.341792   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.341805   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:07.341875   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:07.360670   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.360683   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:07.360751   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:07.381436   21529 logs.go:278] 0 containers: []
	W0222 21:25:07.381450   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:07.381457   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:07.381465   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:07.422194   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:07.422210   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:07.436117   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:07.436132   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:07.493678   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:07.493690   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:07.493698   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:07.516685   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:07.516700   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:09.565938   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049253068s)
	I0222 21:25:08.442924   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:10.941485   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:12.066180   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:12.199655   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:12.219039   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.219052   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:12.219122   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:12.239520   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.239534   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:12.239604   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:12.259176   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.259192   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:12.259261   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:12.278594   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.278607   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:12.278679   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:12.298855   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.298868   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:12.298935   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:12.319489   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.319502   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:12.319570   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:12.339947   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.339964   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:12.340044   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:12.358810   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.358825   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:12.358895   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:12.378133   21529 logs.go:278] 0 containers: []
	W0222 21:25:12.378147   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:12.378155   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:12.378162   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:12.418990   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:12.419006   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:12.433288   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:12.433304   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:12.491097   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:12.491110   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:12.491132   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:12.514381   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:12.514395   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:14.561476   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047096367s)
	I0222 21:25:13.441275   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:15.941159   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:17.061774   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:17.200110   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:17.219377   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.219390   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:17.219464   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:17.238644   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.238659   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:17.238730   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:17.257690   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.257703   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:17.257775   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:17.277668   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.277683   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:17.277754   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:17.298237   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.298251   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:17.298324   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:17.320587   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.320601   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:17.320676   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:17.342983   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.343021   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:17.343103   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:17.363594   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.363608   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:17.363679   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:17.387353   21529 logs.go:278] 0 containers: []
	W0222 21:25:17.387367   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:17.387376   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:17.387384   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:17.399594   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:17.399610   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:17.458669   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:17.458690   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:17.458697   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:17.480809   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:17.480824   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:19.527929   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047121064s)
	I0222 21:25:19.528035   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:19.528042   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:17.941317   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:20.441586   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:22.441734   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:22.068079   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:22.199742   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:22.220599   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.220614   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:22.220691   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:22.239903   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.239917   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:22.239988   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:22.259512   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.259526   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:22.259599   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:22.278962   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.278977   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:22.279046   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:22.297741   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.297756   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:22.297828   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:22.317416   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.317431   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:22.317503   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:22.337010   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.337023   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:22.337093   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:22.357985   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.358000   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:22.358071   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:22.378102   21529 logs.go:278] 0 containers: []
	W0222 21:25:22.378117   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:22.378124   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:22.378135   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:22.390492   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:22.390506   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:22.447564   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:22.447577   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:22.447585   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:22.469681   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:22.469695   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:24.514401   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044720508s)
	I0222 21:25:24.514545   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:24.514553   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:24.940895   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:26.943403   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:27.052404   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:27.201547   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:27.223305   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.223319   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:27.223397   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:27.242507   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.242521   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:27.242592   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:27.262404   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.262418   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:27.262489   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:27.281495   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.281509   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:27.281577   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:27.301145   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.301160   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:27.301228   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:27.320256   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.320270   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:27.320340   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:27.338699   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.338712   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:27.338783   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:27.359231   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.359245   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:27.359314   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:27.378692   21529 logs.go:278] 0 containers: []
	W0222 21:25:27.378711   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:27.378722   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:27.378731   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:27.400611   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:27.400624   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:29.448836   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04822698s)
	I0222 21:25:29.448943   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:29.448951   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:29.488424   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:29.488437   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:29.501346   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:29.501359   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:29.556292   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:29.439789   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:31.442146   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:32.058052   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:32.199935   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:32.220875   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.220891   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:32.220965   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:32.241462   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.241477   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:32.241553   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:32.261226   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.261241   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:32.261318   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:32.280663   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.280679   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:32.280761   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:32.300926   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.300940   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:32.301011   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:32.321828   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.321843   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:32.321915   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:32.343584   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.343599   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:32.343669   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:32.364054   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.364068   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:32.364138   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:32.387117   21529 logs.go:278] 0 containers: []
	W0222 21:25:32.387131   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:32.387141   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:32.387148   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:32.429300   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:32.429315   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:32.442548   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:32.442563   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:32.498429   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:32.498455   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:32.498462   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:32.520869   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:32.520883   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:34.565803   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044931263s)
	I0222 21:25:33.940416   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:35.941564   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:37.068107   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:37.200026   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:25:37.221998   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.222012   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:25:37.222081   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:25:37.241293   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.241307   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:25:37.241376   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:25:37.261243   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.261256   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:25:37.261322   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:25:37.280484   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.280498   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:25:37.280568   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:25:37.300606   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.300627   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:25:37.300695   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:25:37.319554   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.319568   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:25:37.319642   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:25:37.339882   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.339895   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:25:37.339962   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:25:37.359288   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.359302   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:25:37.359374   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:25:37.378340   21529 logs.go:278] 0 containers: []
	W0222 21:25:37.378354   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:25:37.378361   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:25:37.378368   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:25:37.418393   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:25:37.418406   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:25:37.431958   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:25:37.431972   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:25:37.491212   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:25:37.491225   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:25:37.491234   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:25:37.512437   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:25:37.512452   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:25:39.558884   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046447278s)
	I0222 21:25:38.441664   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:40.941494   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:42.059173   21529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:25:42.199488   21529 kubeadm.go:637] restartCluster took 4m10.916925108s
	W0222 21:25:42.199575   21529 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0222 21:25:42.199594   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0222 21:25:42.610200   21529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:25:42.620222   21529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:25:42.628036   21529 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:25:42.628090   21529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:25:42.635544   21529 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:25:42.635579   21529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:25:42.687097   21529 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:25:42.687139   21529 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:25:42.854830   21529 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:25:42.854918   21529 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:25:42.855007   21529 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:25:43.012719   21529 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:25:43.013494   21529 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:25:43.020034   21529 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:25:43.089352   21529 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:25:43.110912   21529 out.go:204]   - Generating certificates and keys ...
	I0222 21:25:43.111006   21529 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:25:43.111065   21529 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:25:43.111184   21529 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 21:25:43.111247   21529 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 21:25:43.111378   21529 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 21:25:43.111457   21529 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 21:25:43.111515   21529 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 21:25:43.111592   21529 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 21:25:43.111671   21529 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 21:25:43.111738   21529 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 21:25:43.111774   21529 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 21:25:43.111821   21529 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:25:43.383566   21529 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:25:43.436537   21529 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:25:43.724236   21529 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:25:43.891703   21529 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:25:43.892290   21529 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:25:43.935494   21529 out.go:204]   - Booting up control plane ...
	I0222 21:25:43.935636   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:25:43.935745   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:25:43.935832   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:25:43.935947   21529 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:25:43.936125   21529 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:25:43.443319   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:45.940481   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:47.942695   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:50.440261   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:52.440481   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:54.942000   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:57.440360   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:25:59.440650   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:01.441024   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:03.941494   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:06.440622   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:08.939011   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:10.939771   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:12.940432   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:14.941897   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:17.439892   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:19.441306   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:21.940282   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:23.900942   21529 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:26:23.901760   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:23.902036   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:26:24.439240   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:26.441330   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:28.903377   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:28.903652   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:26:28.442310   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:30.941727   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:33.440000   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:35.440078   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:37.441031   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:38.904947   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:38.905177   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:26:39.939757   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:42.440667   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:44.940244   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:47.439744   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:49.440161   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:51.440236   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:53.939582   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:55.939965   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:26:58.906092   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:26:58.906316   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:26:58.441006   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:00.939568   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:02.940333   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:05.439685   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:07.439959   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:09.938742   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:11.939674   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:14.440848   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:16.939122   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:18.939690   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:21.439221   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:23.439760   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:25.440604   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:27.938481   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:30.439159   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:32.938124   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:34.939734   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:37.438760   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:38.907893   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:27:38.908166   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:27:38.908188   21529 kubeadm.go:322] 
	I0222 21:27:38.908251   21529 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:27:38.908301   21529 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:27:38.908307   21529 kubeadm.go:322] 
	I0222 21:27:38.908344   21529 kubeadm.go:322] This error is likely caused by:
	I0222 21:27:38.908400   21529 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:27:38.908519   21529 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:27:38.908534   21529 kubeadm.go:322] 
	I0222 21:27:38.908659   21529 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:27:38.908697   21529 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:27:38.908740   21529 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:27:38.908753   21529 kubeadm.go:322] 
	I0222 21:27:38.908862   21529 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:27:38.908968   21529 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:27:38.909074   21529 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:27:38.909151   21529 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:27:38.909247   21529 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:27:38.909287   21529 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:27:38.911820   21529 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:27:38.911895   21529 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:27:38.912000   21529 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:27:38.912098   21529 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:27:38.912172   21529 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:27:38.912236   21529 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0222 21:27:38.912357   21529 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0222 21:27:38.912384   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0222 21:27:39.327021   21529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:27:39.337465   21529 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:27:39.337526   21529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:27:39.345156   21529 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:27:39.345175   21529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:27:39.392174   21529 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0222 21:27:39.392221   21529 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:27:39.558482   21529 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:27:39.558560   21529 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:27:39.558673   21529 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:27:39.717320   21529 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:27:39.718032   21529 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:27:39.724876   21529 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0222 21:27:39.796182   21529 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:27:39.817754   21529 out.go:204]   - Generating certificates and keys ...
	I0222 21:27:39.817896   21529 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:27:39.817972   21529 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:27:39.818058   21529 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 21:27:39.818127   21529 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 21:27:39.818235   21529 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 21:27:39.818334   21529 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 21:27:39.818421   21529 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 21:27:39.818474   21529 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 21:27:39.818551   21529 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 21:27:39.818632   21529 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 21:27:39.818666   21529 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 21:27:39.818716   21529 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:27:39.884743   21529 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:27:39.946621   21529 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:27:40.262279   21529 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:27:40.327151   21529 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:27:40.328024   21529 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:27:40.349755   21529 out.go:204]   - Booting up control plane ...
	I0222 21:27:40.349934   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:27:40.350156   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:27:40.350285   21529 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:27:40.350439   21529 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:27:40.350825   21529 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:27:39.438883   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:41.939196   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:44.439095   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:46.439219   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:48.440505   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:50.938989   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:53.439623   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:55.939090   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:27:58.439313   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:00.939741   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:02.940406   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:05.438817   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:07.439629   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:09.938915   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:12.437634   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:14.439420   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:16.938348   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:20.339172   21529 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0222 21:28:20.340259   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:20.340453   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:28:18.938535   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:21.438575   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:25.340823   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:25.340978   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:28:23.937735   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:26.438919   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:28.939181   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:30.939399   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:35.342169   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:35.342414   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:28:32.940342   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:35.438225   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:37.439222   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:39.938328   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:42.439944   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:44.938887   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:47.439018   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:49.938661   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:52.437633   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:55.342655   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:28:55.342827   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:28:54.938526   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:57.438753   22044 pod_ready.go:102] pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace has status "Ready":"False"
	I0222 21:28:59.431638   22044 pod_ready.go:81] duration metric: took 4m0.006285379s waiting for pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace to be "Ready" ...
	E0222 21:28:59.431669   22044 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7997d45854-nqsq8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0222 21:28:59.431688   22044 pod_ready.go:38] duration metric: took 4m9.380084624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 21:28:59.431718   22044 kubeadm.go:637] restartCluster took 4m26.029408009s
	W0222 21:28:59.431874   22044 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0222 21:28:59.431908   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0222 21:29:03.710431   22044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.278564122s)
	I0222 21:29:03.710501   22044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:29:03.720367   22044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:29:03.728847   22044 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0222 21:29:03.728899   22044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:29:03.736560   22044 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0222 21:29:03.736742   22044 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0222 21:29:03.788681   22044 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0222 21:29:03.788783   22044 kubeadm.go:322] [preflight] Running pre-flight checks
	I0222 21:29:03.898632   22044 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0222 21:29:03.898799   22044 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0222 21:29:03.898906   22044 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0222 21:29:04.031778   22044 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0222 21:29:04.054019   22044 out.go:204]   - Generating certificates and keys ...
	I0222 21:29:04.054124   22044 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0222 21:29:04.054184   22044 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0222 21:29:04.054279   22044 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0222 21:29:04.054342   22044 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0222 21:29:04.054437   22044 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0222 21:29:04.054499   22044 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0222 21:29:04.054549   22044 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0222 21:29:04.054601   22044 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0222 21:29:04.054684   22044 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0222 21:29:04.054743   22044 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0222 21:29:04.054774   22044 kubeadm.go:322] [certs] Using the existing "sa" key
	I0222 21:29:04.054821   22044 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0222 21:29:04.073133   22044 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0222 21:29:04.185851   22044 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0222 21:29:04.398267   22044 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0222 21:29:04.505032   22044 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0222 21:29:04.517061   22044 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0222 21:29:04.517766   22044 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0222 21:29:04.517834   22044 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0222 21:29:04.594714   22044 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0222 21:29:04.616526   22044 out.go:204]   - Booting up control plane ...
	I0222 21:29:04.616625   22044 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0222 21:29:04.616702   22044 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0222 21:29:04.616783   22044 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0222 21:29:04.616853   22044 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0222 21:29:04.616986   22044 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0222 21:29:10.102385   22044 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501829 seconds
	I0222 21:29:10.102587   22044 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0222 21:29:10.111640   22044 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0222 21:29:10.630056   22044 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0222 21:29:10.630205   22044 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-677000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0222 21:29:11.143852   22044 kubeadm.go:322] [bootstrap-token] Using token: 5xj618.violiape8c7lahrg
	I0222 21:29:11.181356   22044 out.go:204]   - Configuring RBAC rules ...
	I0222 21:29:11.181613   22044 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0222 21:29:11.185140   22044 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0222 21:29:11.223952   22044 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0222 21:29:11.226230   22044 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0222 21:29:11.229259   22044 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0222 21:29:11.231730   22044 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0222 21:29:11.240039   22044 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0222 21:29:11.402377   22044 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0222 21:29:11.632762   22044 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0222 21:29:11.633563   22044 kubeadm.go:322] 
	I0222 21:29:11.633651   22044 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0222 21:29:11.633663   22044 kubeadm.go:322] 
	I0222 21:29:11.633745   22044 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0222 21:29:11.633755   22044 kubeadm.go:322] 
	I0222 21:29:11.633782   22044 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0222 21:29:11.633867   22044 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0222 21:29:11.633944   22044 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0222 21:29:11.633957   22044 kubeadm.go:322] 
	I0222 21:29:11.634014   22044 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0222 21:29:11.634028   22044 kubeadm.go:322] 
	I0222 21:29:11.634065   22044 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0222 21:29:11.634069   22044 kubeadm.go:322] 
	I0222 21:29:11.634149   22044 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0222 21:29:11.634208   22044 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0222 21:29:11.634272   22044 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0222 21:29:11.634276   22044 kubeadm.go:322] 
	I0222 21:29:11.634357   22044 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0222 21:29:11.634427   22044 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0222 21:29:11.634435   22044 kubeadm.go:322] 
	I0222 21:29:11.634517   22044 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5xj618.violiape8c7lahrg \
	I0222 21:29:11.634624   22044 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf \
	I0222 21:29:11.634642   22044 kubeadm.go:322] 	--control-plane 
	I0222 21:29:11.634649   22044 kubeadm.go:322] 
	I0222 21:29:11.634747   22044 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0222 21:29:11.634755   22044 kubeadm.go:322] 
	I0222 21:29:11.634815   22044 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5xj618.violiape8c7lahrg \
	I0222 21:29:11.634939   22044 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:430b5988e125a102740e991bc04f120df9a4d7a8473ad3af9c2079587f375bbf 
	I0222 21:29:11.640193   22044 kubeadm.go:322] W0223 05:29:03.783591    9058 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0222 21:29:11.640323   22044 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0222 21:29:11.640473   22044 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:29:11.640484   22044 cni.go:84] Creating CNI manager for ""
	I0222 21:29:11.640496   22044 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:29:11.661833   22044 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0222 21:29:11.720070   22044 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0222 21:29:11.739148   22044 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0222 21:29:11.753449   22044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0222 21:29:11.753523   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:11.753523   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321 minikube.k8s.io/name=embed-certs-677000 minikube.k8s.io/updated_at=2023_02_22T21_29_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:11.761546   22044 ops.go:34] apiserver oom_adj: -16
	I0222 21:29:11.930133   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:12.497383   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:12.997736   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:13.498278   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:13.997646   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:14.497796   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:14.997961   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:15.499384   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:15.997764   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:16.497457   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:16.997591   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:17.497319   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:17.997390   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:18.497439   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:18.999387   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:19.497474   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:19.998357   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:20.497615   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:20.997393   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:21.498032   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:21.997471   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:22.497331   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:22.999367   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:23.498924   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:23.997826   22044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0222 21:29:24.069030   22044 kubeadm.go:1073] duration metric: took 12.315735906s to wait for elevateKubeSystemPrivileges.
	I0222 21:29:24.069047   22044 kubeadm.go:403] StartCluster complete in 4m50.694935334s
	I0222 21:29:24.069067   22044 settings.go:142] acquiring lock: {Name:mk09b0ae3061a5d1df7256089aea48f15d65cbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:29:24.069156   22044 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:29:24.069923   22044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:29:24.070185   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0222 21:29:24.070205   22044 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0222 21:29:24.070266   22044 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-677000"
	I0222 21:29:24.070294   22044 addons.go:227] Setting addon storage-provisioner=true in "embed-certs-677000"
	I0222 21:29:24.070296   22044 addons.go:65] Setting dashboard=true in profile "embed-certs-677000"
	W0222 21:29:24.070302   22044 addons.go:236] addon storage-provisioner should already be in state true
	I0222 21:29:24.070312   22044 addons.go:227] Setting addon dashboard=true in "embed-certs-677000"
	I0222 21:29:24.070311   22044 addons.go:65] Setting default-storageclass=true in profile "embed-certs-677000"
	W0222 21:29:24.070323   22044 addons.go:236] addon dashboard should already be in state true
	I0222 21:29:24.070331   22044 config.go:182] Loaded profile config "embed-certs-677000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:29:24.070339   22044 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-677000"
	I0222 21:29:24.070349   22044 host.go:66] Checking if "embed-certs-677000" exists ...
	I0222 21:29:24.070330   22044 addons.go:65] Setting metrics-server=true in profile "embed-certs-677000"
	I0222 21:29:24.070374   22044 host.go:66] Checking if "embed-certs-677000" exists ...
	I0222 21:29:24.070386   22044 addons.go:227] Setting addon metrics-server=true in "embed-certs-677000"
	W0222 21:29:24.070395   22044 addons.go:236] addon metrics-server should already be in state true
	I0222 21:29:24.070421   22044 host.go:66] Checking if "embed-certs-677000" exists ...
	I0222 21:29:24.070711   22044 cli_runner.go:164] Run: docker container inspect embed-certs-677000 --format={{.State.Status}}
	I0222 21:29:24.070735   22044 cli_runner.go:164] Run: docker container inspect embed-certs-677000 --format={{.State.Status}}
	I0222 21:29:24.070790   22044 cli_runner.go:164] Run: docker container inspect embed-certs-677000 --format={{.State.Status}}
	I0222 21:29:24.070862   22044 cli_runner.go:164] Run: docker container inspect embed-certs-677000 --format={{.State.Status}}
	I0222 21:29:24.217281   22044 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0222 21:29:24.197379   22044 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 21:29:24.238242   22044 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0222 21:29:24.275639   22044 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0222 21:29:24.275685   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0222 21:29:24.313353   22044 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:29:24.334058   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0222 21:29:24.334150   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:29:24.334154   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:29:24.355151   22044 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0222 21:29:24.378462   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0222 21:29:24.378525   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0222 21:29:24.378698   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:29:24.381496   22044 addons.go:227] Setting addon default-storageclass=true in "embed-certs-677000"
	W0222 21:29:24.381525   22044 addons.go:236] addon default-storageclass should already be in state true
	I0222 21:29:24.381549   22044 host.go:66] Checking if "embed-certs-677000" exists ...
	I0222 21:29:24.382155   22044 cli_runner.go:164] Run: docker container inspect embed-certs-677000 --format={{.State.Status}}
	I0222 21:29:24.399481   22044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0222 21:29:24.437961   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:29:24.438177   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:29:24.471339   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:29:24.473031   22044 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0222 21:29:24.473081   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0222 21:29:24.473197   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:29:24.535626   22044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54877 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/embed-certs-677000/id_rsa Username:docker}
	I0222 21:29:24.633652   22044 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-677000" context rescaled to 1 replicas
	I0222 21:29:24.633686   22044 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 21:29:24.657224   22044 out.go:177] * Verifying Kubernetes components...
	I0222 21:29:24.648722   22044 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0222 21:29:24.657255   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0222 21:29:24.693137   22044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:29:24.744152   22044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:29:24.756239   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0222 21:29:24.756261   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0222 21:29:24.757842   22044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0222 21:29:24.762087   22044 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0222 21:29:24.762103   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0222 21:29:24.847074   22044 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0222 21:29:24.847094   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0222 21:29:24.849925   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0222 21:29:24.849946   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0222 21:29:24.936409   22044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0222 21:29:24.947225   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0222 21:29:24.947242   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0222 21:29:25.042146   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0222 21:29:25.042168   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0222 21:29:25.131361   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0222 21:29:25.131381   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0222 21:29:25.243906   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0222 21:29:25.243925   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0222 21:29:25.333371   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0222 21:29:25.333390   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0222 21:29:25.437246   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0222 21:29:25.437283   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0222 21:29:25.459583   22044 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0222 21:29:25.459600   22044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0222 21:29:25.538453   22044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0222 21:29:26.328917   22044 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.929408225s)
	I0222 21:29:26.328950   22044 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0222 21:29:26.328965   22044 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.635826335s)
	I0222 21:29:26.329110   22044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-677000
	I0222 21:29:26.400756   22044 node_ready.go:35] waiting up to 6m0s for node "embed-certs-677000" to be "Ready" ...
	I0222 21:29:26.441329   22044 node_ready.go:49] node "embed-certs-677000" has status "Ready":"True"
	I0222 21:29:26.441349   22044 node_ready.go:38] duration metric: took 40.564632ms waiting for node "embed-certs-677000" to be "Ready" ...
	I0222 21:29:26.441367   22044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 21:29:26.449909   22044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-rd49b" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:26.664500   22044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.920343339s)
	I0222 21:29:26.664547   22044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.906699594s)
	I0222 21:29:26.734928   22044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.798500492s)
	I0222 21:29:26.734962   22044 addons.go:457] Verifying addon metrics-server=true in "embed-certs-677000"
	I0222 21:29:26.968164   22044 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.429694163s)
	I0222 21:29:26.991174   22044 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-677000 addons enable metrics-server	
	
	
	I0222 21:29:27.012455   22044 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0222 21:29:27.034066   22044 addons.go:492] enable addons completed in 2.963905394s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0222 21:29:27.473789   22044 pod_ready.go:92] pod "coredns-787d4945fb-rd49b" in "kube-system" namespace has status "Ready":"True"
	I0222 21:29:27.473806   22044 pod_ready.go:81] duration metric: took 1.02389111s waiting for pod "coredns-787d4945fb-rd49b" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.473815   22044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-tzngs" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.478341   22044 pod_ready.go:92] pod "coredns-787d4945fb-tzngs" in "kube-system" namespace has status "Ready":"True"
	I0222 21:29:27.478350   22044 pod_ready.go:81] duration metric: took 4.529983ms waiting for pod "coredns-787d4945fb-tzngs" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.478356   22044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.482617   22044 pod_ready.go:92] pod "etcd-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:29:27.482625   22044 pod_ready.go:81] duration metric: took 4.264549ms waiting for pod "etcd-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.482632   22044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.487098   22044 pod_ready.go:92] pod "kube-apiserver-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:29:27.487106   22044 pod_ready.go:81] duration metric: took 4.46935ms waiting for pod "kube-apiserver-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.487116   22044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.605492   22044 pod_ready.go:92] pod "kube-controller-manager-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:29:27.605502   22044 pod_ready.go:81] duration metric: took 118.383385ms waiting for pod "kube-controller-manager-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:27.605509   22044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8djm7" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:28.028068   22044 pod_ready.go:92] pod "kube-proxy-8djm7" in "kube-system" namespace has status "Ready":"True"
	I0222 21:29:28.028084   22044 pod_ready.go:81] duration metric: took 422.575974ms waiting for pod "kube-proxy-8djm7" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:28.028093   22044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:28.404214   22044 pod_ready.go:92] pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:29:28.404225   22044 pod_ready.go:81] duration metric: took 376.131844ms waiting for pod "kube-scheduler-embed-certs-677000" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:28.404232   22044 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7997d45854-2cbxp" in "kube-system" namespace to be "Ready" ...
	I0222 21:29:30.830188   22044 pod_ready.go:102] pod "metrics-server-7997d45854-2cbxp" in "kube-system" namespace has status "Ready":"False"
	I0222 21:29:32.831054   22044 pod_ready.go:102] pod "metrics-server-7997d45854-2cbxp" in "kube-system" namespace has status "Ready":"False"
	I0222 21:29:35.344209   21529 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0222 21:29:35.344445   21529 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0222 21:29:35.344456   21529 kubeadm.go:322] 
	I0222 21:29:35.344506   21529 kubeadm.go:322] Unfortunately, an error has occurred:
	I0222 21:29:35.344552   21529 kubeadm.go:322] 	timed out waiting for the condition
	I0222 21:29:35.344560   21529 kubeadm.go:322] 
	I0222 21:29:35.344600   21529 kubeadm.go:322] This error is likely caused by:
	I0222 21:29:35.344635   21529 kubeadm.go:322] 	- The kubelet is not running
	I0222 21:29:35.344768   21529 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0222 21:29:35.344783   21529 kubeadm.go:322] 
	I0222 21:29:35.344897   21529 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0222 21:29:35.344941   21529 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0222 21:29:35.344982   21529 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0222 21:29:35.344988   21529 kubeadm.go:322] 
	I0222 21:29:35.345111   21529 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0222 21:29:35.345221   21529 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0222 21:29:35.345321   21529 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0222 21:29:35.345379   21529 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0222 21:29:35.345468   21529 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0222 21:29:35.345511   21529 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0222 21:29:35.347495   21529 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0222 21:29:35.347580   21529 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0222 21:29:35.347697   21529 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0222 21:29:35.347781   21529 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0222 21:29:35.347866   21529 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0222 21:29:35.347921   21529 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0222 21:29:35.347945   21529 kubeadm.go:403] StartCluster complete in 8m4.096882529s
	I0222 21:29:35.348037   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0222 21:29:35.368504   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.368518   21529 logs.go:280] No container was found matching "kube-apiserver"
	I0222 21:29:35.368608   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0222 21:29:35.390318   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.390332   21529 logs.go:280] No container was found matching "etcd"
	I0222 21:29:35.390403   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0222 21:29:35.410592   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.410608   21529 logs.go:280] No container was found matching "coredns"
	I0222 21:29:35.410678   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0222 21:29:35.429681   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.429696   21529 logs.go:280] No container was found matching "kube-scheduler"
	I0222 21:29:35.429766   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0222 21:29:35.451049   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.451063   21529 logs.go:280] No container was found matching "kube-proxy"
	I0222 21:29:35.451140   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0222 21:29:35.475092   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.475107   21529 logs.go:280] No container was found matching "kube-controller-manager"
	I0222 21:29:35.475191   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0222 21:29:35.496009   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.496024   21529 logs.go:280] No container was found matching "kindnet"
	I0222 21:29:35.496097   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0222 21:29:35.515551   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.515569   21529 logs.go:280] No container was found matching "storage-provisioner"
	I0222 21:29:35.515644   21529 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0222 21:29:35.537284   21529 logs.go:278] 0 containers: []
	W0222 21:29:35.537299   21529 logs.go:280] No container was found matching "kubernetes-dashboard"
	I0222 21:29:35.537307   21529 logs.go:124] Gathering logs for kubelet ...
	I0222 21:29:35.537315   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0222 21:29:35.582497   21529 logs.go:124] Gathering logs for dmesg ...
	I0222 21:29:35.582517   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0222 21:29:35.599453   21529 logs.go:124] Gathering logs for describe nodes ...
	I0222 21:29:35.599471   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0222 21:29:35.668710   21529 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0222 21:29:35.668723   21529 logs.go:124] Gathering logs for Docker ...
	I0222 21:29:35.668732   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0222 21:29:35.695984   21529 logs.go:124] Gathering logs for container status ...
	I0222 21:29:35.696004   21529 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0222 21:29:37.746902   21529 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05091002s)
	W0222 21:29:37.747059   21529 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0222 21:29:37.747081   21529 out.go:239] * 
	W0222 21:29:37.747207   21529 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:29:37.747229   21529 out.go:239] * 
	W0222 21:29:37.747988   21529 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0222 21:29:37.831476   21529 out.go:177] 
	W0222 21:29:37.873442   21529 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0222 21:29:37.873536   21529 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0222 21:29:37.873581   21529 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0222 21:29:35.310675   22044 pod_ready.go:102] pod "metrics-server-7997d45854-2cbxp" in "kube-system" namespace has status "Ready":"False"
	I0222 21:29:37.312232   22044 pod_ready.go:102] pod "metrics-server-7997d45854-2cbxp" in "kube-system" namespace has status "Ready":"False"
	I0222 21:29:37.894540   21529 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 05:21:27 UTC, end at Thu 2023-02-23 05:29:39 UTC. --
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.360651674Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.361101847Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.361152173Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362113546Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362160116Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362184831Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362195048Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362224721Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362304394Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362362371Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362385432Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362403799Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362704899Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362772253Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362790217Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.363289477Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.370809711Z" level=info msg="Loading containers: start."
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.448406285Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.481550543Z" level=info msg="Loading containers: done."
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.490307846Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.490373476Z" level=info msg="Daemon has completed initialization"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.513109070Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.517014469Z" level=info msg="API listen on [::]:2376"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.523278664Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-23T05:29:41Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Feb23 05:11] hrtimer: interrupt took 1057500 ns
	
	* 
	* ==> kernel <==
	*  05:29:42 up  1:28,  0 users,  load average: 0.52, 1.09, 1.47
	Linux old-k8s-version-865000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 05:21:27 UTC, end at Thu 2023-02-23 05:29:42 UTC. --
	Feb 23 05:29:40 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 05:29:41 old-k8s-version-865000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Feb 23 05:29:41 old-k8s-version-865000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 05:29:41 old-k8s-version-865000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 05:29:41 old-k8s-version-865000 kubelet[14250]: I0223 05:29:41.616115   14250 server.go:410] Version: v1.16.0
	Feb 23 05:29:41 old-k8s-version-865000 kubelet[14250]: I0223 05:29:41.616458   14250 plugins.go:100] No cloud provider specified.
	Feb 23 05:29:41 old-k8s-version-865000 kubelet[14250]: I0223 05:29:41.616507   14250 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 05:29:41 old-k8s-version-865000 kubelet[14250]: I0223 05:29:41.618271   14250 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 05:29:41 old-k8s-version-865000 kubelet[14250]: W0223 05:29:41.619490   14250 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 05:29:41 old-k8s-version-865000 kubelet[14250]: W0223 05:29:41.619564   14250 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 05:29:41 old-k8s-version-865000 kubelet[14250]: F0223 05:29:41.619598   14250 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 05:29:41 old-k8s-version-865000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 05:29:41 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 05:29:42 old-k8s-version-865000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	Feb 23 05:29:42 old-k8s-version-865000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 05:29:42 old-k8s-version-865000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 05:29:42 old-k8s-version-865000 kubelet[14290]: I0223 05:29:42.367030   14290 server.go:410] Version: v1.16.0
	Feb 23 05:29:42 old-k8s-version-865000 kubelet[14290]: I0223 05:29:42.367205   14290 plugins.go:100] No cloud provider specified.
	Feb 23 05:29:42 old-k8s-version-865000 kubelet[14290]: I0223 05:29:42.367215   14290 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 05:29:42 old-k8s-version-865000 kubelet[14290]: I0223 05:29:42.368999   14290 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 05:29:42 old-k8s-version-865000 kubelet[14290]: W0223 05:29:42.369712   14290 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 05:29:42 old-k8s-version-865000 kubelet[14290]: W0223 05:29:42.369778   14290 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 05:29:42 old-k8s-version-865000 kubelet[14290]: F0223 05:29:42.369802   14290 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 05:29:42 old-k8s-version-865000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 05:29:42 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 21:29:42.163899   22446 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (418.981776ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-865000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (497.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0222 21:29:43.079847    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 21:29:45.131869    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:30:01.508643    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:30:03.150406    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:30:15.065855    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:30:44.077210    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:30:54.270916    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:30:59.517467    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:31:42.483458    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:32:17.323997    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:32:17.661765    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:32:19.847836    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:32:34.121652    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:32:45.365572    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:33:05.532176    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:33:11.838948    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:34:08.107866    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:34:34.886056    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:34:43.075778    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 21:34:45.128037    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:35:03.290401    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:35:31.308654    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:35:44.218917    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:35:54.414327    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:35:59.659658    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:36:06.272300    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:36:08.327611    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:36:42.627594    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:37:07.273963    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:37:17.806261    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:37:19.992883    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:37:34.266924    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:38:11.984393    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:38:52.165257    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:39:08.255993    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (397.694743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-865000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-865000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-865000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c",
	        "Created": "2023-02-23T05:15:31.417090555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:21:27.411292149Z",
	            "FinishedAt": "2023-02-23T05:21:24.519545355Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hosts",
	        "LogPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c-json.log",
	        "Name": "/old-k8s-version-865000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-865000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-865000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-865000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-865000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-865000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "351da03ebb5828b9ae09ef98a1a92ca983c146b1286e410710fdcd0e8b997b44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54722"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54723"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54724"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54725"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54726"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/351da03ebb58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-865000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106a8b195383",
	                        "old-k8s-version-865000"
	                    ],
	                    "NetworkID": "947893b68cb410e9e5982aa5b8afeae1844c1ff30155168ea70efca5bffdb638",
	                    "EndpointID": "514d220541551db5b6e5df3d10fa1937f8cfad31f95838367761a5c304074af5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
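(The failing poll URL, https://127.0.0.1:54726, matches the 8443/tcp mapping shown in the inspect output above. A quick way to confirm the mapped apiserver port, a sketch reusing the same Go-template style that appears later in this log:)

    docker inspect old-k8s-version-865000 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

(For the state captured here this should print 54726.)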
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (405.916209ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-865000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-865000 logs -n 25: (3.479159139s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-865000   | old-k8s-version-865000       | jenkins | v1.29.0 | 22 Feb 23 21:19 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-865000                         | old-k8s-version-865000       | jenkins | v1.29.0 | 22 Feb 23 21:21 PST | 22 Feb 23 21:21 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-865000        | old-k8s-version-865000       | jenkins | v1.29.0 | 22 Feb 23 21:21 PST | 22 Feb 23 21:21 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-865000                         | old-k8s-version-865000       | jenkins | v1.29.0 | 22 Feb 23 21:21 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-081000 sudo                         | no-preload-081000            | jenkins | v1.29.0 | 22 Feb 23 21:22 PST | 22 Feb 23 21:22 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-081000                              | no-preload-081000            | jenkins | v1.29.0 | 22 Feb 23 21:22 PST | 22 Feb 23 21:22 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-081000                              | no-preload-081000            | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:23 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-081000                              | no-preload-081000            | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:23 PST |
	| delete  | -p no-preload-081000                              | no-preload-081000            | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:23 PST |
	| start   | -p embed-certs-677000                             | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:23 PST | 22 Feb 23 21:24 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-677000       | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:24 PST | 22 Feb 23 21:24 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-677000                             | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:24 PST | 22 Feb 23 21:24 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-677000            | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:24 PST | 22 Feb 23 21:24 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-677000                             | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:24 PST | 22 Feb 23 21:33 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-677000 sudo                        | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-677000                             | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-677000                             | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-677000                             | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	| delete  | -p embed-certs-677000                             | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	| delete  | -p                                                | disable-driver-mounts-986000 | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	|         | disable-driver-mounts-986000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:34 PST |
	|         | default-k8s-diff-port-783000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:34 PST | 22 Feb 23 21:34 PST |
	|         | default-k8s-diff-port-783000                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:34 PST | 22 Feb 23 21:35 PST |
	|         | default-k8s-diff-port-783000                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783000  | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:35 PST | 22 Feb 23 21:35 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:35 PST |                     |
	|         | default-k8s-diff-port-783000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
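	(For reference, a sketch only: the still-pending old-k8s-version start from the audit table above, reassembled into a single command line with exactly the arguments listed in the table.)
	
	  out/minikube-darwin-amd64 start -p old-k8s-version-865000 --memory=2200 \
	    --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false --driver=docker \
	    --kubernetes-version=v1.16.0
	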
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 21:35:09
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 21:35:09.769077   23082 out.go:296] Setting OutFile to fd 1 ...
	I0222 21:35:09.769247   23082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:35:09.769252   23082 out.go:309] Setting ErrFile to fd 2...
	I0222 21:35:09.769256   23082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:35:09.769361   23082 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 21:35:09.770697   23082 out.go:303] Setting JSON to false
	I0222 21:35:09.789309   23082 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5684,"bootTime":1677124825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 21:35:09.789388   23082 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 21:35:09.810804   23082 out.go:177] * [default-k8s-diff-port-783000] minikube v1.29.0 on Darwin 13.2
	I0222 21:35:09.852956   23082 notify.go:220] Checking for updates...
	I0222 21:35:09.874838   23082 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 21:35:09.896802   23082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:35:09.918791   23082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 21:35:09.940670   23082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 21:35:09.961570   23082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 21:35:09.982667   23082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 21:35:10.004458   23082 config.go:182] Loaded profile config "default-k8s-diff-port-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:35:10.005105   23082 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 21:35:10.068170   23082 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 21:35:10.068290   23082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:35:10.216721   23082 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:35:09.973849283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:35:10.259188   23082 out.go:177] * Using the docker driver based on existing profile
	I0222 21:35:10.280295   23082 start.go:296] selected driver: docker
	I0222 21:35:10.280310   23082 start.go:857] validating driver "docker" against &{Name:default-k8s-diff-port-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-783000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:35:10.280367   23082 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 21:35:10.282907   23082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:35:10.429097   23082 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:35:10.189335845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:35:10.429273   23082 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0222 21:35:10.429293   23082 cni.go:84] Creating CNI manager for ""
	I0222 21:35:10.429306   23082 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:35:10.429313   23082 start_flags.go:319] config:
	{Name:default-k8s-diff-port-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-783000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:35:10.451160   23082 out.go:177] * Starting control plane node default-k8s-diff-port-783000 in cluster default-k8s-diff-port-783000
	I0222 21:35:10.473111   23082 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 21:35:10.494986   23082 out.go:177] * Pulling base image ...
	I0222 21:35:10.538078   23082 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 21:35:10.538120   23082 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:35:10.538219   23082 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 21:35:10.538249   23082 cache.go:57] Caching tarball of preloaded images
	I0222 21:35:10.538489   23082 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 21:35:10.538505   23082 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 21:35:10.539580   23082 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/config.json ...
	I0222 21:35:10.594529   23082 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 21:35:10.594552   23082 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 21:35:10.594572   23082 cache.go:193] Successfully downloaded all kic artifacts
	I0222 21:35:10.594609   23082 start.go:364] acquiring machines lock for default-k8s-diff-port-783000: {Name:mk9839757843162cb2127e4a9287d471c96f80a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 21:35:10.594701   23082 start.go:368] acquired machines lock for "default-k8s-diff-port-783000" in 74.877µs
	I0222 21:35:10.594728   23082 start.go:96] Skipping create...Using existing machine configuration
	I0222 21:35:10.594735   23082 fix.go:55] fixHost starting: 
	I0222 21:35:10.594979   23082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-783000 --format={{.State.Status}}
	I0222 21:35:10.651415   23082 fix.go:103] recreateIfNeeded on default-k8s-diff-port-783000: state=Stopped err=<nil>
	W0222 21:35:10.651442   23082 fix.go:129] unexpected machine state, will restart: <nil>
	I0222 21:35:10.695082   23082 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-783000" ...
	I0222 21:35:10.716273   23082 cli_runner.go:164] Run: docker start default-k8s-diff-port-783000
	I0222 21:35:11.056453   23082 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-783000 --format={{.State.Status}}
	I0222 21:35:11.119590   23082 kic.go:426] container "default-k8s-diff-port-783000" state is running.
	I0222 21:35:11.120156   23082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-783000
	I0222 21:35:11.186890   23082 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/config.json ...
	I0222 21:35:11.187318   23082 machine.go:88] provisioning docker machine ...
	I0222 21:35:11.187364   23082 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-783000"
	I0222 21:35:11.187509   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:11.253234   23082 main.go:141] libmachine: Using SSH client type: native
	I0222 21:35:11.253700   23082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 55368 <nil> <nil>}
	I0222 21:35:11.253719   23082 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-783000 && echo "default-k8s-diff-port-783000" | sudo tee /etc/hostname
	I0222 21:35:11.420978   23082 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-783000
	
	I0222 21:35:11.421073   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:11.484741   23082 main.go:141] libmachine: Using SSH client type: native
	I0222 21:35:11.485110   23082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 55368 <nil> <nil>}
	I0222 21:35:11.485125   23082 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-783000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-783000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-783000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 21:35:11.621549   23082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:35:11.621577   23082 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 21:35:11.621594   23082 ubuntu.go:177] setting up certificates
	I0222 21:35:11.621604   23082 provision.go:83] configureAuth start
	I0222 21:35:11.621682   23082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-783000
	I0222 21:35:11.681471   23082 provision.go:138] copyHostCerts
	I0222 21:35:11.681644   23082 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 21:35:11.681656   23082 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 21:35:11.681777   23082 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 21:35:11.681997   23082 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 21:35:11.682003   23082 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 21:35:11.682072   23082 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 21:35:11.682221   23082 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 21:35:11.682227   23082 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 21:35:11.682296   23082 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 21:35:11.682422   23082 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-783000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-783000]
	I0222 21:35:11.799317   23082 provision.go:172] copyRemoteCerts
	I0222 21:35:11.799389   23082 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 21:35:11.799456   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:11.859771   23082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55368 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/default-k8s-diff-port-783000/id_rsa Username:docker}
	I0222 21:35:11.955710   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0222 21:35:11.973291   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 21:35:11.990328   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0222 21:35:12.007776   23082 provision.go:86] duration metric: configureAuth took 386.14791ms
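	(Verification sketch, not captured output: the SANs generated for server.pem above can be inspected locally with openssl, using the path reported by the log.)
	
	  openssl x509 -in /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	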
	I0222 21:35:12.007792   23082 ubuntu.go:193] setting minikube options for container-runtime
	I0222 21:35:12.007972   23082 config.go:182] Loaded profile config "default-k8s-diff-port-783000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:35:12.008092   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:12.067705   23082 main.go:141] libmachine: Using SSH client type: native
	I0222 21:35:12.068050   23082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 55368 <nil> <nil>}
	I0222 21:35:12.068059   23082 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 21:35:12.201455   23082 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 21:35:12.201467   23082 ubuntu.go:71] root file system type: overlay
	I0222 21:35:12.201555   23082 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 21:35:12.201638   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:12.260192   23082 main.go:141] libmachine: Using SSH client type: native
	I0222 21:35:12.260539   23082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 55368 <nil> <nil>}
	I0222 21:35:12.260588   23082 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 21:35:12.406667   23082 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 21:35:12.406784   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:12.466708   23082 main.go:141] libmachine: Using SSH client type: native
	I0222 21:35:12.467086   23082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 55368 <nil> <nil>}
	I0222 21:35:12.467099   23082 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 21:35:12.606340   23082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:35:12.606360   23082 machine.go:91] provisioned docker machine in 1.419002861s
	I0222 21:35:12.606371   23082 start.go:300] post-start starting for "default-k8s-diff-port-783000" (driver="docker")
	I0222 21:35:12.606378   23082 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 21:35:12.606446   23082 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 21:35:12.606495   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:12.664902   23082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55368 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/default-k8s-diff-port-783000/id_rsa Username:docker}
	I0222 21:35:12.761405   23082 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 21:35:12.765022   23082 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 21:35:12.765039   23082 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 21:35:12.765052   23082 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 21:35:12.765058   23082 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 21:35:12.765070   23082 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 21:35:12.765163   23082 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 21:35:12.765340   23082 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 21:35:12.765534   23082 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 21:35:12.773034   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:35:12.790654   23082 start.go:303] post-start completed in 184.254594ms
	I0222 21:35:12.790810   23082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 21:35:12.790861   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:12.852278   23082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55368 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/default-k8s-diff-port-783000/id_rsa Username:docker}
	I0222 21:35:12.945286   23082 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 21:35:12.949775   23082 fix.go:57] fixHost completed within 2.354987092s
	I0222 21:35:12.949787   23082 start.go:83] releasing machines lock for "default-k8s-diff-port-783000", held for 2.355027434s
	I0222 21:35:12.949874   23082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-783000
	I0222 21:35:13.007362   23082 ssh_runner.go:195] Run: cat /version.json
	I0222 21:35:13.007374   23082 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 21:35:13.007437   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:13.007447   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:13.071228   23082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55368 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/default-k8s-diff-port-783000/id_rsa Username:docker}
	I0222 21:35:13.071544   23082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55368 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/default-k8s-diff-port-783000/id_rsa Username:docker}
	I0222 21:35:13.219384   23082 ssh_runner.go:195] Run: systemctl --version
	I0222 21:35:13.224406   23082 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 21:35:13.230008   23082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 21:35:13.247517   23082 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 21:35:13.247607   23082 ssh_runner.go:195] Run: which cri-dockerd
	I0222 21:35:13.251946   23082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 21:35:13.260016   23082 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 21:35:13.275065   23082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0222 21:35:13.282609   23082 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0222 21:35:13.282628   23082 start.go:485] detecting cgroup driver to use...
	I0222 21:35:13.282639   23082 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:35:13.282751   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:35:13.296518   23082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 21:35:13.305222   23082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 21:35:13.313885   23082 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 21:35:13.313940   23082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 21:35:13.322503   23082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:35:13.331535   23082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 21:35:13.340508   23082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:35:13.349084   23082 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 21:35:13.357901   23082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 21:35:13.366624   23082 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 21:35:13.373967   23082 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 21:35:13.381152   23082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:35:13.454217   23082 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 21:35:13.525017   23082 start.go:485] detecting cgroup driver to use...
	I0222 21:35:13.525038   23082 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:35:13.525108   23082 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 21:35:13.538066   23082 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 21:35:13.538137   23082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 21:35:13.549682   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:35:13.564826   23082 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 21:35:13.668909   23082 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 21:35:13.784025   23082 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 21:35:13.784044   23082 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 21:35:13.798198   23082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:35:13.888958   23082 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 21:35:14.162538   23082 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:35:14.238327   23082 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 21:35:14.307263   23082 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:35:14.380398   23082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:35:14.448232   23082 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 21:35:14.460561   23082 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 21:35:14.460713   23082 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 21:35:14.465075   23082 start.go:553] Will wait 60s for crictl version
	I0222 21:35:14.465118   23082 ssh_runner.go:195] Run: which crictl
	I0222 21:35:14.468725   23082 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 21:35:14.577659   23082 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 21:35:14.577735   23082 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:35:14.602796   23082 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:35:14.672940   23082 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 21:35:14.673119   23082 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-783000 dig +short host.docker.internal
	I0222 21:35:14.797642   23082 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:35:14.797756   23082 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:35:14.802259   23082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:35:14.812468   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:14.872375   23082 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:35:14.872455   23082 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:35:14.892616   23082 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0222 21:35:14.892633   23082 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:35:14.892717   23082 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:35:14.913059   23082 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0222 21:35:14.913075   23082 cache_images.go:84] Images are preloaded, skipping loading
	I0222 21:35:14.913158   23082 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:35:14.939008   23082 cni.go:84] Creating CNI manager for ""
	I0222 21:35:14.939031   23082 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:35:14.939047   23082 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0222 21:35:14.939062   23082 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-783000 NodeName:default-k8s-diff-port-783000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:35:14.939166   23082 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-783000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 21:35:14.939261   23082 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-783000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-783000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0222 21:35:14.939330   23082 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 21:35:14.947483   23082 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:35:14.947546   23082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:35:14.954943   23082 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (460 bytes)
	I0222 21:35:14.968494   23082 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:35:14.981642   23082 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0222 21:35:14.995376   23082 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:35:14.999919   23082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:35:15.009853   23082 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000 for IP: 192.168.67.2
	I0222 21:35:15.009872   23082 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:35:15.010121   23082 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:35:15.010212   23082 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:35:15.010319   23082 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.key
	I0222 21:35:15.010456   23082 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/apiserver.key.c7fa3a9e
	I0222 21:35:15.010535   23082 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/proxy-client.key
	I0222 21:35:15.010756   23082 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:35:15.010806   23082 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:35:15.010818   23082 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:35:15.010856   23082 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:35:15.010889   23082 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:35:15.010924   23082 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:35:15.010997   23082 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:35:15.011680   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:35:15.030104   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0222 21:35:15.048650   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:35:15.066962   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0222 21:35:15.085752   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:35:15.103921   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:35:15.121623   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:35:15.138995   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:35:15.156828   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:35:15.174520   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:35:15.192167   23082 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:35:15.210531   23082 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:35:15.224339   23082 ssh_runner.go:195] Run: openssl version
	I0222 21:35:15.230354   23082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:35:15.238682   23082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:35:15.242736   23082 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:35:15.242781   23082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:35:15.248729   23082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:35:15.256772   23082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:35:15.265158   23082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:35:15.269733   23082 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:35:15.269803   23082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:35:15.275641   23082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
	I0222 21:35:15.283284   23082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:35:15.291920   23082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:35:15.296198   23082 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:35:15.296253   23082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:35:15.301838   23082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 21:35:15.309592   23082 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-783000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:35:15.309738   23082 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:35:15.330426   23082 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:35:15.338525   23082 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0222 21:35:15.338540   23082 kubeadm.go:633] restartCluster start
	I0222 21:35:15.338596   23082 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0222 21:35:15.345940   23082 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:15.346007   23082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-783000
	I0222 21:35:15.407366   23082 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-783000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:35:15.408079   23082 kubeconfig.go:146] "default-k8s-diff-port-783000" context is missing from /Users/jenkins/minikube-integration/15909-2664/kubeconfig - will repair!
	I0222 21:35:15.408428   23082 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:35:15.410043   23082 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0222 21:35:15.418217   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:15.418323   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:15.427255   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:15.928471   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:15.928585   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:15.939681   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:16.427390   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:16.427498   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:16.438142   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:16.927627   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:16.927724   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:16.937523   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:17.428174   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:17.428369   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:17.439495   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:17.929180   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:17.929268   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:17.939028   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:18.429097   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:18.429173   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:18.438845   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:18.927920   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:18.928045   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:18.938321   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:19.427927   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:19.428082   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:19.439051   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:19.928334   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:19.928462   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:19.938804   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:20.427455   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:20.427624   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:20.438293   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:20.929533   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:20.929772   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:20.940614   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:21.428216   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:21.428312   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:21.438596   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:21.928895   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:21.929094   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:21.939716   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:22.429516   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:22.429708   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:22.440552   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:22.927517   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:22.927597   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:22.937159   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:23.429425   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:23.429535   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:23.440386   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:23.928930   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:23.929182   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:23.940231   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:24.428874   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:24.428983   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:24.438694   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:24.927989   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:24.928159   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:24.939329   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:25.427536   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:25.427611   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:25.437043   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:25.437053   23082 api_server.go:165] Checking apiserver status ...
	I0222 21:35:25.437101   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:35:25.445644   23082 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:25.445656   23082 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
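
The block above is minikube's apiserver process wait: the same pgrep probe is re-run roughly every 500ms until it succeeds or the overall deadline expires, at which point the cluster is flagged as needing a reconfigure. A minimal Go sketch of that generic poll-until-deadline pattern follows; it is illustrative only (pollUntil and checkAPIServerPID are hypothetical names, not minikube's actual api_server.go code).

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServerPID is a hypothetical stand-in for the pgrep probe seen in the log above.
func checkAPIServerPID() error {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	fmt.Printf("apiserver pid: %s", out)
	return nil
}

// pollUntil retries check at the given interval until it succeeds or timeout elapses.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := pollUntil(500*time.Millisecond, 10*time.Second, checkAPIServerPID); err != nil {
		fmt.Println("apiserver wait failed:", err)
	}
}
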
	I0222 21:35:25.445664   23082 kubeadm.go:1120] stopping kube-system containers ...
	I0222 21:35:25.445737   23082 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:35:25.467300   23082 docker.go:456] Stopping containers: [38222c1e5a9a 81266e03597c 65f468d3946f c2d2774cd7b0 0e7c32bd2dbf 0b13ef595cf8 7cdc90a7a5cf 1ffbb666e1e8 8c0800486f8b f2d936ad845f fa59d83b3f7a 0880fe42d2ec 69ecae5fe3f3 6bac845b88b8 a83986c3e292 305e2bb9a838]
	I0222 21:35:25.467388   23082 ssh_runner.go:195] Run: docker stop 38222c1e5a9a 81266e03597c 65f468d3946f c2d2774cd7b0 0e7c32bd2dbf 0b13ef595cf8 7cdc90a7a5cf 1ffbb666e1e8 8c0800486f8b f2d936ad845f fa59d83b3f7a 0880fe42d2ec 69ecae5fe3f3 6bac845b88b8 a83986c3e292 305e2bb9a838
	I0222 21:35:25.488577   23082 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0222 21:35:25.499077   23082 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:35:25.506782   23082 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 23 05:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 23 05:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Feb 23 05:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 23 05:34 /etc/kubernetes/scheduler.conf
	
	I0222 21:35:25.506842   23082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0222 21:35:25.514397   23082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0222 21:35:25.521962   23082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0222 21:35:25.529375   23082 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:25.529423   23082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0222 21:35:25.536605   23082 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0222 21:35:25.544209   23082 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:35:25.544267   23082 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0222 21:35:25.551395   23082 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:35:25.558970   23082 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0222 21:35:25.558982   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:35:25.615260   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:35:26.270791   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:35:26.406903   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:35:26.476493   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:35:26.582443   23082 api_server.go:51] waiting for apiserver process to appear ...
	I0222 21:35:26.582530   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:35:27.094910   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:35:27.594439   23082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:35:27.668889   23082 api_server.go:71] duration metric: took 1.086424153s to wait for apiserver process to appear ...
	I0222 21:35:27.668926   23082 api_server.go:87] waiting for apiserver healthz status ...
	I0222 21:35:27.668957   23082 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55372/healthz ...
	I0222 21:35:27.670233   23082 api_server.go:268] stopped: https://127.0.0.1:55372/healthz: Get "https://127.0.0.1:55372/healthz": EOF
	I0222 21:35:28.170342   23082 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55372/healthz ...
	I0222 21:35:30.665978   23082 api_server.go:278] https://127.0.0.1:55372/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0222 21:35:30.665999   23082 api_server.go:102] status: https://127.0.0.1:55372/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0222 21:35:30.670446   23082 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55372/healthz ...
	I0222 21:35:30.677224   23082 api_server.go:278] https://127.0.0.1:55372/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0222 21:35:30.677247   23082 api_server.go:102] status: https://127.0.0.1:55372/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0222 21:35:31.170906   23082 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55372/healthz ...
	I0222 21:35:31.178319   23082 api_server.go:278] https://127.0.0.1:55372/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:35:31.178334   23082 api_server.go:102] status: https://127.0.0.1:55372/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:35:31.670560   23082 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55372/healthz ...
	I0222 21:35:31.676840   23082 api_server.go:278] https://127.0.0.1:55372/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:35:31.676854   23082 api_server.go:102] status: https://127.0.0.1:55372/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:35:32.170611   23082 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55372/healthz ...
	I0222 21:35:32.176494   23082 api_server.go:278] https://127.0.0.1:55372/healthz returned 200:
	ok
	I0222 21:35:32.185391   23082 api_server.go:140] control plane version: v1.26.1
	I0222 21:35:32.185406   23082 api_server.go:130] duration metric: took 4.516384016s to wait for apiserver health ...
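
The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, retried until it returns 200; the intermediate 403 (anonymous client) and 500 (post-start hooks still initialising) responses are logged and tolerated. A minimal Go sketch of such a probe, assuming the forwarded host port shown in the log and skipping TLS verification the way an anonymous probe would (illustrative only, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url every 500ms until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe runs before any client certificate is set up, so this sketch
		// skips TLS verification, matching the anonymous 403 responses seen above.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// 55372 is the host port mapped to the apiserver in the log above; it is an
	// assumption here and will differ on any other run.
	if err := waitForHealthz("https://127.0.0.1:55372/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
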
	I0222 21:35:32.185412   23082 cni.go:84] Creating CNI manager for ""
	I0222 21:35:32.185422   23082 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:35:32.209448   23082 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0222 21:35:32.230100   23082 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0222 21:35:32.239556   23082 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0222 21:35:32.281596   23082 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 21:35:32.290589   23082 system_pods.go:59] 8 kube-system pods found
	I0222 21:35:32.290606   23082 system_pods.go:61] "coredns-787d4945fb-k55xr" [585598a0-a16b-4242-a3ca-59234b5817c0] Running
	I0222 21:35:32.290610   23082 system_pods.go:61] "etcd-default-k8s-diff-port-783000" [119c84f7-9905-47f4-9cb0-f9ac84f23b09] Running
	I0222 21:35:32.290614   23082 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-783000" [6a90bbf2-f60b-4587-b4ec-111830ca8918] Running
	I0222 21:35:32.290622   23082 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-783000" [04fe32dd-d0be-4a72-897a-fff9a51fe114] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0222 21:35:32.290626   23082 system_pods.go:61] "kube-proxy-fv9ws" [98fa424e-f212-4ec3-bc4b-c12a4276a841] Running
	I0222 21:35:32.290630   23082 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-783000" [a622cc76-88c1-4c81-9a83-e21c4da7fdc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0222 21:35:32.290635   23082 system_pods.go:61] "metrics-server-7997d45854-vdq6f" [6dd8e401-dacb-44d4-8990-a3caeeb115ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0222 21:35:32.290639   23082 system_pods.go:61] "storage-provisioner" [1b3c2c08-e97d-44ba-ba51-c0e6708f30e5] Running
	I0222 21:35:32.290644   23082 system_pods.go:74] duration metric: took 9.023527ms to wait for pod list to return data ...
	I0222 21:35:32.290652   23082 node_conditions.go:102] verifying NodePressure condition ...
	I0222 21:35:32.294133   23082 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 21:35:32.294148   23082 node_conditions.go:123] node cpu capacity is 6
	I0222 21:35:32.294157   23082 node_conditions.go:105] duration metric: took 3.50057ms to run NodePressure ...
	I0222 21:35:32.294169   23082 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:35:32.785565   23082 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0222 21:35:32.792218   23082 kubeadm.go:784] kubelet initialised
	I0222 21:35:32.792233   23082 kubeadm.go:785] duration metric: took 6.651782ms waiting for restarted kubelet to initialise ...
	I0222 21:35:32.792240   23082 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0222 21:35:32.800336   23082 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-k55xr" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:32.808086   23082 pod_ready.go:92] pod "coredns-787d4945fb-k55xr" in "kube-system" namespace has status "Ready":"True"
	I0222 21:35:32.808101   23082 pod_ready.go:81] duration metric: took 7.750087ms waiting for pod "coredns-787d4945fb-k55xr" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:32.808109   23082 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:32.871384   23082 pod_ready.go:92] pod "etcd-default-k8s-diff-port-783000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:35:32.871400   23082 pod_ready.go:81] duration metric: took 63.284572ms waiting for pod "etcd-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:32.871411   23082 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:32.879584   23082 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-783000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:35:32.879598   23082 pod_ready.go:81] duration metric: took 8.179636ms waiting for pod "kube-apiserver-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:32.879612   23082 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:34.894426   23082 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-783000" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:36.894566   23082 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-783000" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:39.395212   23082 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-783000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:35:39.395225   23082 pod_ready.go:81] duration metric: took 6.515488034s waiting for pod "kube-controller-manager-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:39.395232   23082 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fv9ws" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:39.399912   23082 pod_ready.go:92] pod "kube-proxy-fv9ws" in "kube-system" namespace has status "Ready":"True"
	I0222 21:35:39.399921   23082 pod_ready.go:81] duration metric: took 4.684042ms waiting for pod "kube-proxy-fv9ws" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:39.399927   23082 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:41.410933   23082 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-783000" in "kube-system" namespace has status "Ready":"True"
	I0222 21:35:41.410946   23082 pod_ready.go:81] duration metric: took 2.010978436s waiting for pod "kube-scheduler-default-k8s-diff-port-783000" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:41.410953   23082 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace to be "Ready" ...
	I0222 21:35:43.421262   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:45.423941   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:47.426265   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:49.925326   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:52.423206   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:54.424187   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:56.923946   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:35:58.969275   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:01.424942   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:03.924166   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:06.425291   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:08.425584   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:10.925151   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:13.422598   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:15.424960   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:17.426254   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:19.924509   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:21.925152   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:23.926197   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:26.425249   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:28.925941   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:31.424646   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:33.924188   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:35.925608   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:37.925855   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:39.926248   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:42.424218   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:44.926386   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:47.423306   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:49.424756   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:51.425049   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:53.926093   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:56.425323   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:36:58.926355   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:01.425832   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:03.924916   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:05.925576   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:08.424750   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:10.925780   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:13.424143   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:15.424778   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:17.924898   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:19.925737   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:22.425122   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:24.426137   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:26.925183   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:28.927988   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:31.425693   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:33.926202   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:35.926730   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:38.424427   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:40.425393   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:42.923928   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:44.925183   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:46.926012   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:48.926289   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:51.425429   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:53.425601   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:55.924440   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:57.926754   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:37:59.927216   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:02.425740   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:04.925813   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:06.926557   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:08.927460   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:10.927598   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:13.425608   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:15.427477   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:17.926853   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:20.425592   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:22.426851   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:24.928192   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:27.426792   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:29.927542   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:32.427310   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:34.925231   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:36.926952   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:38.928050   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:41.426822   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:43.927148   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:45.928280   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:48.429501   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:50.925652   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:53.427271   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:55.925428   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:57.925691   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:38:59.926240   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:39:02.427652   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:39:04.927477   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:39:07.427094   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	I0222 21:39:09.428290   23082 pod_ready.go:102] pod "metrics-server-7997d45854-vdq6f" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 05:21:27 UTC, end at Thu 2023-02-23 05:39:14 UTC. --
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.360651674Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.361101847Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.361152173Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362113546Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362160116Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362184831Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362195048Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362224721Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362304394Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362362371Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362385432Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362403799Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362704899Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362772253Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362790217Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.363289477Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.370809711Z" level=info msg="Loading containers: start."
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.448406285Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.481550543Z" level=info msg="Loading containers: done."
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.490307846Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.490373476Z" level=info msg="Daemon has completed initialization"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.513109070Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.517014469Z" level=info msg="API listen on [::]:2376"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.523278664Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-02-23T05:39:17Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Feb23 05:11] hrtimer: interrupt took 1057500 ns
	
	* 
	* ==> kernel <==
	*  05:39:17 up  1:38,  0 users,  load average: 0.54, 0.75, 1.09
	Linux old-k8s-version-865000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 05:21:27 UTC, end at Thu 2023-02-23 05:39:17 UTC. --
	Feb 23 05:39:15 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 05:39:16 old-k8s-version-865000 kubelet[24476]: I0223 05:39:16.261445   24476 server.go:410] Version: v1.16.0
	Feb 23 05:39:16 old-k8s-version-865000 kubelet[24476]: I0223 05:39:16.261717   24476 plugins.go:100] No cloud provider specified.
	Feb 23 05:39:16 old-k8s-version-865000 kubelet[24476]: I0223 05:39:16.261753   24476 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 05:39:16 old-k8s-version-865000 kubelet[24476]: I0223 05:39:16.263426   24476 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 05:39:16 old-k8s-version-865000 kubelet[24476]: W0223 05:39:16.264097   24476 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 05:39:16 old-k8s-version-865000 kubelet[24476]: W0223 05:39:16.264167   24476 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 05:39:16 old-k8s-version-865000 kubelet[24476]: F0223 05:39:16.264191   24476 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 05:39:16 old-k8s-version-865000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 05:39:17 old-k8s-version-865000 kubelet[24492]: I0223 05:39:17.014093   24492 server.go:410] Version: v1.16.0
	Feb 23 05:39:17 old-k8s-version-865000 kubelet[24492]: I0223 05:39:17.014423   24492 plugins.go:100] No cloud provider specified.
	Feb 23 05:39:17 old-k8s-version-865000 kubelet[24492]: I0223 05:39:17.014459   24492 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 05:39:17 old-k8s-version-865000 kubelet[24492]: I0223 05:39:17.016309   24492 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 05:39:17 old-k8s-version-865000 kubelet[24492]: W0223 05:39:17.016982   24492 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 05:39:17 old-k8s-version-865000 kubelet[24492]: W0223 05:39:17.017096   24492 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 05:39:17 old-k8s-version-865000 kubelet[24492]: F0223 05:39:17.017149   24492 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 05:39:17 old-k8s-version-865000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 05:39:17 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0222 21:39:17.274621   23422 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (409.803965ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-865000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.87s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.72s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:39:43.225007    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 21:39:45.277478    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:40:03.297583    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:40:44.225364    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:40:54.419261    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:40:59.664979    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:41:26.346022    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:41:42.632727    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:42:17.813273    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:42:19.998064    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:42:34.272277    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:43:11.989211    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:43:40.879725    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:43:52.172170    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:44:08.263036    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:44:47.407223    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:47.412472    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:47.422572    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:47.442844    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:47.484920    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:47.565423    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:47.726334    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:48.046484    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:48.686715    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:49.967025    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:52.527268    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
E0222 21:44:57.647786    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:45:03.300786    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 21:45:07.890188    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54726/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0222 21:45:23.050652    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:45:28.372413    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:45:44.230341    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:45:54.423952    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:45:59.670255    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:46:09.335521    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:46:42.638119    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:46:55.219263    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:47:17.817583    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:47:20.004185    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:47:31.257530    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/default-k8s-diff-port-783000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:47:34.277464    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0222 21:48:11.994569    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (395.718735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-865000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-865000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-865000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (803ns)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-865000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-865000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-865000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c",
	        "Created": "2023-02-23T05:15:31.417090555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T05:21:27.411292149Z",
	            "FinishedAt": "2023-02-23T05:21:24.519545355Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hostname",
	        "HostsPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/hosts",
	        "LogPath": "/var/lib/docker/containers/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c/106a8b1953836258f3265e97e2547029eeec326da49bfa086827f0a086a7096c-json.log",
	        "Name": "/old-k8s-version-865000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-865000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-865000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93-init/diff:/var/lib/docker/overlay2/d735a905256a842f090e2c879afc9d92376c839b4676aab2d392ae501e606232/diff:/var/lib/docker/overlay2/d1f2f3f6ac23ac49767fdc30d9c98225ca88bf64cd567e0d86d56a9233fd763d/diff:/var/lib/docker/overlay2/f0fa698605bd05ca65a330d4275608edcd970cd76859d3cb8354bb4254d0f08b/diff:/var/lib/docker/overlay2/63febb00ae34d33919004ab9942589dece0f8c645f1d216ccb4299944904202d/diff:/var/lib/docker/overlay2/c3b69572a9377c568e6ba6262a57fed7babe20b40ee8de365575e7f5edb8a33c/diff:/var/lib/docker/overlay2/94ef868439834d58280ec26aeb7d1549bc4f2eed9a9b7a214aaadfe9801d8638/diff:/var/lib/docker/overlay2/b13946ad442fea4a8d40bdbfe4c5d25c00fd8943577be95102c710f9a16278f3/diff:/var/lib/docker/overlay2/e9393d1f48ae5ce65f214ef58518cffd0dcae338efd05a200bc2a9c4952a7e11/diff:/var/lib/docker/overlay2/ee489b944eee182f771ca641762318eca8c44e5315622e5003d7215a77926c43/diff:/var/lib/docker/overlay2/7fc06d
6bf7ccc4b1c6af5a9aef949eb7c79e7f19568861f2b3d145ecf82f892c/diff:/var/lib/docker/overlay2/6551f474d7a059dd528cd8a102d8d3daf9f787cd3867d4cf0a8ecbe3137845f7/diff:/var/lib/docker/overlay2/16cb6b8eb7f92e97399c2b93c8436919e1224e15bf1a6c93349763abd15dd3d0/diff:/var/lib/docker/overlay2/aec62818fca9efa0d3d657164ce0265a5b62d0895cbf6df521724fe91cec3edb/diff:/var/lib/docker/overlay2/3f69fa56b42132fa5af6a30509a1490ac967ab0bb13b085d9e02158a27a1d86c/diff:/var/lib/docker/overlay2/8d1cebecde0fae7654d090a1091c9b2390b0b7c9d82e6273c294842aab59de34/diff:/var/lib/docker/overlay2/158a459a2e1f3458d0019dd0b14b04015255b1ed87f965306282f7b3e70a38fc/diff:/var/lib/docker/overlay2/a56ff1809b9696eaecf1befd98d45d0991a44a736550ac02d8d6118644da603d/diff:/var/lib/docker/overlay2/8c96c8d23c323c83538e80ac561282484d79fe84e63ad053ae788e86f87c1ef4/diff:/var/lib/docker/overlay2/ec09433094ead97c6aaea064f2f1e48b8307ae5816c5d97df91cb7bd05fec68f/diff:/var/lib/docker/overlay2/cd9fc5eaeb18492d8b784c4c8fc92a8fa34551a0910b052700985d2a9380a4dd/diff:/var/lib/d
ocker/overlay2/04b42e69265100106da7547a97dd3662e94986998055ab81e820f8db49dc2971/diff:/var/lib/docker/overlay2/5db9f3630a76a8469b949dd07eb98cfc6237154c800f8f3aca8ccaf39f05448f/diff:/var/lib/docker/overlay2/2d16c0b3e1ed51f470f9c35de90354910962c318d531641b26e7bb615367d319/diff:/var/lib/docker/overlay2/8901b538fcccec8e0f6b3fd323c372021b9ec98d0d87e32302bcd1081f43379a/diff:/var/lib/docker/overlay2/da09afbc05fd27e3beb8c85c2097a8c2472689b52ee4998b494df79026a685bd/diff:/var/lib/docker/overlay2/8588968b29feb5e06cc9a0c784934eceb4ac9ba4e418b6137a1dd4d21c1caaa2/diff:/var/lib/docker/overlay2/7f2af1b3ff78cc5bbc7bba935d67e913a5f9e678f66467e4d29ebbba94ada290/diff:/var/lib/docker/overlay2/3705f200b0512d179b1d47648fe9de6303de6edb16366b71147debcd908852cc/diff:/var/lib/docker/overlay2/a65b125a93208a4dd9c0c32ba885c17b95d8ca095b1e3663e47ef3d40eb46c4a/diff:/var/lib/docker/overlay2/699456f0b88dd59d3c858cb5b72c591e6c9548ad5424c399cde92ac6fbb62c1f/diff:/var/lib/docker/overlay2/d68cc821b6f53d22b3e4278c433e3253b61e11e323942f292495520f5c1
56d09/diff:/var/lib/docker/overlay2/1160486e9945f24f96fc29bdbc90043530e8a836438e8ac2f15584c126e7becf/diff:/var/lib/docker/overlay2/ade2a355e817a502244b9949538fab6a121e5470090805f56cedcc1d326eaa50/diff:/var/lib/docker/overlay2/b9610e93be96ad7fa3449bc85812a48b31f473d4f9665177b09344c0da63676a/diff:/var/lib/docker/overlay2/a84b42adc3239ead9ad6efb1b79d87c7a425b9c699f8a19c79624219e4993a4d/diff:/var/lib/docker/overlay2/e95299454110b8c49ed959b2de345e2030d1ab766008f754b0f765e1dfdd2d83/diff:/var/lib/docker/overlay2/4ae785a0642ee329a8c37b6b14982d4cf62c236dfc1924baaf06121c717bc7d7/diff:/var/lib/docker/overlay2/d622f6e4652a4f47b54d0c94fc2f898039074d50181b1c295c171f465f6df163/diff:/var/lib/docker/overlay2/250d59aa3acb4cfd98726e26ac853da8694439cd310db826ac7202b81c1db23a/diff:/var/lib/docker/overlay2/92d316e8010485b8001e0b4afb059d38754579ceef0552bb4e8d9185fd1bff67/diff:/var/lib/docker/overlay2/e1e3f48218f59ff3e5116128a23b26c974f5c70a446819c352249cb546476eb2/diff:/var/lib/docker/overlay2/77a9ef264190dd4d87402d2c9ac7cb20d76097
ff77087beff536b2cd4b965b31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a658bced79351bc8f8d55536fa19a842f3cb91ab79c29dce7c641a4b21b2aa93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-865000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-865000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-865000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-865000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "351da03ebb5828b9ae09ef98a1a92ca983c146b1286e410710fdcd0e8b997b44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54722"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54723"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54724"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54725"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54726"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/351da03ebb58",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-865000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106a8b195383",
	                        "old-k8s-version-865000"
	                    ],
	                    "NetworkID": "947893b68cb410e9e5982aa5b8afeae1844c1ff30155168ea70efca5bffdb638",
	                    "EndpointID": "514d220541551db5b6e5df3d10fa1937f8cfad31f95838367761a5c304074af5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (401.811208ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-865000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-865000 logs -n 25: (3.409249562s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-677000                                | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-677000                                | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-677000                                | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	| delete  | -p embed-certs-677000                                | embed-certs-677000           | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	| delete  | -p                                                   | disable-driver-mounts-986000 | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:33 PST |
	|         | disable-driver-mounts-986000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:33 PST | 22 Feb 23 21:34 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:34 PST | 22 Feb 23 21:34 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:34 PST | 22 Feb 23 21:35 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-783000     | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:35 PST | 22 Feb 23 21:35 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:35 PST | 22 Feb 23 21:44 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:44 PST | 22 Feb 23 21:44 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:44 PST | 22 Feb 23 21:44 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:44 PST | 22 Feb 23 21:44 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:44 PST | 22 Feb 23 21:44 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-783000 | jenkins | v1.29.0 | 22 Feb 23 21:44 PST | 22 Feb 23 21:44 PST |
	|         | default-k8s-diff-port-783000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-150000 --memory=2200 --alsologtostderr | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:44 PST | 22 Feb 23 21:45 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-150000           | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:45 PST | 22 Feb 23 21:45 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-150000                                 | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:45 PST | 22 Feb 23 21:45 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-150000                | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:45 PST | 22 Feb 23 21:45 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-150000 --memory=2200 --alsologtostderr | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:45 PST | 22 Feb 23 21:46 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-150000 sudo                            | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:46 PST | 22 Feb 23 21:46 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-150000                                 | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:46 PST | 22 Feb 23 21:46 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-150000                                 | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:46 PST | 22 Feb 23 21:46 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-150000                                 | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:46 PST | 22 Feb 23 21:46 PST |
	| delete  | -p newest-cni-150000                                 | newest-cni-150000            | jenkins | v1.29.0 | 22 Feb 23 21:46 PST | 22 Feb 23 21:46 PST |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 21:45:36
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 21:45:36.331155   24140 out.go:296] Setting OutFile to fd 1 ...
	I0222 21:45:36.331337   24140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:45:36.331342   24140 out.go:309] Setting ErrFile to fd 2...
	I0222 21:45:36.331346   24140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 21:45:36.331452   24140 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 21:45:36.332893   24140 out.go:303] Setting JSON to false
	I0222 21:45:36.351339   24140 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6311,"bootTime":1677124825,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 21:45:36.351429   24140 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 21:45:36.373333   24140 out.go:177] * [newest-cni-150000] minikube v1.29.0 on Darwin 13.2
	I0222 21:45:36.416069   24140 notify.go:220] Checking for updates...
	I0222 21:45:36.437982   24140 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 21:45:36.459014   24140 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:45:36.479874   24140 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 21:45:36.501054   24140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 21:45:36.522093   24140 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 21:45:36.542865   24140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 21:45:36.564955   24140 config.go:182] Loaded profile config "newest-cni-150000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:45:36.565691   24140 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 21:45:36.627896   24140 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 21:45:36.628007   24140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:45:36.816324   24140 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:45:36.722557561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:45:36.838391   24140 out.go:177] * Using the docker driver based on existing profile
	I0222 21:45:36.860115   24140 start.go:296] selected driver: docker
	I0222 21:45:36.860150   24140 start.go:857] validating driver "docker" against &{Name:newest-cni-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:45:36.860316   24140 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 21:45:36.864194   24140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 21:45:37.006508   24140 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 05:45:36.914405232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 21:45:37.006671   24140 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0222 21:45:37.006688   24140 cni.go:84] Creating CNI manager for ""
	I0222 21:45:37.006699   24140 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:45:37.006708   24140 start_flags.go:319] config:
	{Name:newest-cni-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:45:37.050179   24140 out.go:177] * Starting control plane node newest-cni-150000 in cluster newest-cni-150000
	I0222 21:45:37.071125   24140 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 21:45:37.093194   24140 out.go:177] * Pulling base image ...
	I0222 21:45:37.136938   24140 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:45:37.137026   24140 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 21:45:37.137049   24140 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 21:45:37.137072   24140 cache.go:57] Caching tarball of preloaded images
	I0222 21:45:37.137294   24140 preload.go:174] Found /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0222 21:45:37.137315   24140 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 21:45:37.138290   24140 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/config.json ...
	I0222 21:45:37.194331   24140 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0222 21:45:37.194364   24140 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0222 21:45:37.194384   24140 cache.go:193] Successfully downloaded all kic artifacts
	I0222 21:45:37.194424   24140 start.go:364] acquiring machines lock for newest-cni-150000: {Name:mk84944c6307399c94d0373d1d5e4a278f073f01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0222 21:45:37.194521   24140 start.go:368] acquired machines lock for "newest-cni-150000" in 79.081µs
	I0222 21:45:37.194549   24140 start.go:96] Skipping create...Using existing machine configuration
	I0222 21:45:37.194557   24140 fix.go:55] fixHost starting: 
	I0222 21:45:37.194868   24140 cli_runner.go:164] Run: docker container inspect newest-cni-150000 --format={{.State.Status}}
	I0222 21:45:37.256753   24140 fix.go:103] recreateIfNeeded on newest-cni-150000: state=Stopped err=<nil>
	W0222 21:45:37.256784   24140 fix.go:129] unexpected machine state, will restart: <nil>
	I0222 21:45:37.278730   24140 out.go:177] * Restarting existing docker container for "newest-cni-150000" ...
	I0222 21:45:37.300603   24140 cli_runner.go:164] Run: docker start newest-cni-150000
	I0222 21:45:37.633993   24140 cli_runner.go:164] Run: docker container inspect newest-cni-150000 --format={{.State.Status}}
	I0222 21:45:37.695056   24140 kic.go:426] container "newest-cni-150000" state is running.
	I0222 21:45:37.695627   24140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-150000
	I0222 21:45:37.761745   24140 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/config.json ...
	I0222 21:45:37.762397   24140 machine.go:88] provisioning docker machine ...
	I0222 21:45:37.762432   24140 ubuntu.go:169] provisioning hostname "newest-cni-150000"
	I0222 21:45:37.762550   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:37.835322   24140 main.go:141] libmachine: Using SSH client type: native
	I0222 21:45:37.835837   24140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 56130 <nil> <nil>}
	I0222 21:45:37.835864   24140 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-150000 && echo "newest-cni-150000" | sudo tee /etc/hostname
	I0222 21:45:37.987406   24140 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-150000
	
	I0222 21:45:37.987509   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:38.051530   24140 main.go:141] libmachine: Using SSH client type: native
	I0222 21:45:38.051879   24140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 56130 <nil> <nil>}
	I0222 21:45:38.051893   24140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-150000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-150000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-150000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0222 21:45:38.187982   24140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:45:38.188007   24140 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
	I0222 21:45:38.188025   24140 ubuntu.go:177] setting up certificates
	I0222 21:45:38.188033   24140 provision.go:83] configureAuth start
	I0222 21:45:38.188119   24140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-150000
	I0222 21:45:38.246956   24140 provision.go:138] copyHostCerts
	I0222 21:45:38.247057   24140 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
	I0222 21:45:38.247068   24140 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
	I0222 21:45:38.247174   24140 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
	I0222 21:45:38.247386   24140 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
	I0222 21:45:38.247394   24140 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
	I0222 21:45:38.247451   24140 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
	I0222 21:45:38.247594   24140 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
	I0222 21:45:38.247606   24140 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
	I0222 21:45:38.247664   24140 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
	I0222 21:45:38.247788   24140 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.newest-cni-150000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-150000]
	I0222 21:45:38.434302   24140 provision.go:172] copyRemoteCerts
	I0222 21:45:38.434367   24140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0222 21:45:38.434418   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:38.491699   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:38.586008   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0222 21:45:38.605020   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0222 21:45:38.624097   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0222 21:45:38.641635   24140 provision.go:86] duration metric: configureAuth took 453.561204ms
	I0222 21:45:38.641648   24140 ubuntu.go:193] setting minikube options for container-runtime
	I0222 21:45:38.642008   24140 config.go:182] Loaded profile config "newest-cni-150000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:45:38.642106   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:38.702860   24140 main.go:141] libmachine: Using SSH client type: native
	I0222 21:45:38.703237   24140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 56130 <nil> <nil>}
	I0222 21:45:38.703246   24140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0222 21:45:38.837271   24140 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0222 21:45:38.837292   24140 ubuntu.go:71] root file system type: overlay
	I0222 21:45:38.837414   24140 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0222 21:45:38.837499   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:38.896545   24140 main.go:141] libmachine: Using SSH client type: native
	I0222 21:45:38.896900   24140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 56130 <nil> <nil>}
	I0222 21:45:38.896950   24140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0222 21:45:39.039612   24140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0222 21:45:39.039702   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:39.099153   24140 main.go:141] libmachine: Using SSH client type: native
	I0222 21:45:39.099540   24140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 56130 <nil> <nil>}
	I0222 21:45:39.099553   24140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0222 21:45:39.240060   24140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0222 21:45:39.240077   24140 machine.go:91] provisioned docker machine in 1.477643055s
	I0222 21:45:39.240086   24140 start.go:300] post-start starting for "newest-cni-150000" (driver="docker")
	I0222 21:45:39.240092   24140 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0222 21:45:39.240177   24140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0222 21:45:39.240240   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:39.299091   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:39.396139   24140 ssh_runner.go:195] Run: cat /etc/os-release
	I0222 21:45:39.399794   24140 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0222 21:45:39.399814   24140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0222 21:45:39.399821   24140 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0222 21:45:39.399826   24140 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0222 21:45:39.399833   24140 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
	I0222 21:45:39.399919   24140 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
	I0222 21:45:39.400077   24140 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
	I0222 21:45:39.400252   24140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0222 21:45:39.407578   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:45:39.424586   24140 start.go:303] post-start completed in 184.48539ms
	I0222 21:45:39.424678   24140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 21:45:39.424729   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:39.487453   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:39.579999   24140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0222 21:45:39.584751   24140 fix.go:57] fixHost completed within 2.390143243s
	I0222 21:45:39.584770   24140 start.go:83] releasing machines lock for "newest-cni-150000", held for 2.390198605s
	I0222 21:45:39.584850   24140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-150000
	I0222 21:45:39.643169   24140 ssh_runner.go:195] Run: cat /version.json
	I0222 21:45:39.643193   24140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0222 21:45:39.643231   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:39.643280   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:39.708336   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:39.708459   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:39.859343   24140 ssh_runner.go:195] Run: systemctl --version
	I0222 21:45:39.864242   24140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0222 21:45:39.869653   24140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0222 21:45:39.885743   24140 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0222 21:45:39.885813   24140 ssh_runner.go:195] Run: which cri-dockerd
	I0222 21:45:39.889913   24140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0222 21:45:39.897277   24140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0222 21:45:39.910455   24140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0222 21:45:39.918264   24140 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0222 21:45:39.918280   24140 start.go:485] detecting cgroup driver to use...
	I0222 21:45:39.918291   24140 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:45:39.918419   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:45:39.932311   24140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0222 21:45:39.941025   24140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0222 21:45:39.949729   24140 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0222 21:45:39.949781   24140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0222 21:45:39.958276   24140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:45:39.966912   24140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0222 21:45:39.975389   24140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0222 21:45:39.984315   24140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0222 21:45:39.992533   24140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0222 21:45:40.001246   24140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0222 21:45:40.008634   24140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0222 21:45:40.015920   24140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:45:40.084525   24140 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0222 21:45:40.156010   24140 start.go:485] detecting cgroup driver to use...
	I0222 21:45:40.156029   24140 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0222 21:45:40.156096   24140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0222 21:45:40.167518   24140 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0222 21:45:40.167601   24140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0222 21:45:40.178342   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0222 21:45:40.193128   24140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0222 21:45:40.294024   24140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0222 21:45:40.393213   24140 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0222 21:45:40.393238   24140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0222 21:45:40.407480   24140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:45:40.504477   24140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0222 21:45:40.779331   24140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:45:40.855193   24140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0222 21:45:40.926344   24140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0222 21:45:40.998515   24140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0222 21:45:41.068465   24140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0222 21:45:41.081543   24140 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0222 21:45:41.081663   24140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0222 21:45:41.086740   24140 start.go:553] Will wait 60s for crictl version
	I0222 21:45:41.086792   24140 ssh_runner.go:195] Run: which crictl
	I0222 21:45:41.090530   24140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0222 21:45:41.192578   24140 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0222 21:45:41.192673   24140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:45:41.217670   24140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0222 21:45:41.290649   24140 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0222 21:45:41.290879   24140 cli_runner.go:164] Run: docker exec -t newest-cni-150000 dig +short host.docker.internal
	I0222 21:45:41.421141   24140 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0222 21:45:41.421253   24140 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0222 21:45:41.425855   24140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:45:41.436095   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:41.519121   24140 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0222 21:45:41.540615   24140 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 21:45:41.540783   24140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:45:41.563522   24140 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 21:45:41.563539   24140 docker.go:560] Images already preloaded, skipping extraction
	I0222 21:45:41.563616   24140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0222 21:45:41.583449   24140 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0222 21:45:41.583466   24140 cache_images.go:84] Images are preloaded, skipping loading
	I0222 21:45:41.583551   24140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0222 21:45:41.609240   24140 cni.go:84] Creating CNI manager for ""
	I0222 21:45:41.609259   24140 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:45:41.609277   24140 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0222 21:45:41.609295   24140 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-150000 NodeName:newest-cni-150000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0222 21:45:41.609421   24140 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-150000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0222 21:45:41.609507   24140 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-150000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0222 21:45:41.609584   24140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0222 21:45:41.617739   24140 binaries.go:44] Found k8s binaries, skipping transfer
	I0222 21:45:41.617796   24140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0222 21:45:41.625159   24140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0222 21:45:41.638520   24140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0222 21:45:41.652231   24140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0222 21:45:41.665623   24140 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0222 21:45:41.670078   24140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0222 21:45:41.680471   24140 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000 for IP: 192.168.67.2
	I0222 21:45:41.680488   24140 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:45:41.680672   24140 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
	I0222 21:45:41.680721   24140 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
	I0222 21:45:41.680814   24140 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/client.key
	I0222 21:45:41.680889   24140 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/apiserver.key.c7fa3a9e
	I0222 21:45:41.680941   24140 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/proxy-client.key
	I0222 21:45:41.681138   24140 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
	W0222 21:45:41.681174   24140 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
	I0222 21:45:41.681184   24140 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
	I0222 21:45:41.681224   24140 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
	I0222 21:45:41.681257   24140 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
	I0222 21:45:41.681290   24140 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
	I0222 21:45:41.681361   24140 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
	I0222 21:45:41.681946   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0222 21:45:41.700712   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0222 21:45:41.720239   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0222 21:45:41.740718   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/newest-cni-150000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0222 21:45:41.758986   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0222 21:45:41.777727   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0222 21:45:41.795550   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0222 21:45:41.813318   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0222 21:45:41.831228   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
	I0222 21:45:41.849134   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0222 21:45:41.867002   24140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
	I0222 21:45:41.884376   24140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0222 21:45:41.897457   24140 ssh_runner.go:195] Run: openssl version
	I0222 21:45:41.903380   24140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
	I0222 21:45:41.911760   24140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
	I0222 21:45:41.915793   24140 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
	I0222 21:45:41.915864   24140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
	I0222 21:45:41.921498   24140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
	I0222 21:45:41.929097   24140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0222 21:45:41.937731   24140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:45:41.941839   24140 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:45:41.941883   24140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0222 21:45:41.947445   24140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0222 21:45:41.955128   24140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
	I0222 21:45:41.963447   24140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
	I0222 21:45:41.967992   24140 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
	I0222 21:45:41.968040   24140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
	I0222 21:45:41.973811   24140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
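
The commands above illustrate how the extra CA certificates are installed: each PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it. A rough Go sketch of the same steps, shelling out to openssl (illustrative only, not minikube's code):

// Illustrative sketch of the symlink scheme used above: OpenSSL looks up CAs
// in /etc/ssl/certs by "<subject-hash>.0", so each PEM gets a hash-named link.
// Assumes the openssl binary is available; error handling kept minimal.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	// Example invocation using one of the certificate paths from the log.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
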
	I0222 21:45:41.981661   24140 kubeadm.go:401] StartCluster: {Name:newest-cni-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-150000 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 21:45:41.981775   24140 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:45:42.004827   24140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0222 21:45:42.012936   24140 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0222 21:45:42.012951   24140 kubeadm.go:633] restartCluster start
	I0222 21:45:42.013002   24140 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0222 21:45:42.020281   24140 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:42.020355   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:42.082467   24140 kubeconfig.go:135] verify returned: extract IP: "newest-cni-150000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:45:42.082642   24140 kubeconfig.go:146] "newest-cni-150000" context is missing from /Users/jenkins/minikube-integration/15909-2664/kubeconfig - will repair!
	I0222 21:45:42.083015   24140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:45:42.084622   24140 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0222 21:45:42.092894   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:42.092991   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:42.102357   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:42.602901   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:42.603068   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:42.614201   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:43.102522   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:43.102650   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:43.113106   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:43.602718   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:43.602851   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:43.613700   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:44.102520   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:44.102665   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:44.113249   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:44.604552   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:44.604717   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:44.615126   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:45.103387   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:45.103509   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:45.114765   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:45.604593   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:45.604720   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:45.616116   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:46.104560   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:46.104735   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:46.116014   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:46.603577   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:46.603707   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:46.614724   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:47.104577   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:47.104787   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:47.115344   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:47.602642   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:47.602831   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:47.613857   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:48.103424   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:48.103534   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:48.114613   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:48.604537   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:48.604652   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:48.614752   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:49.103168   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:49.103334   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:49.114695   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:49.603300   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:49.603460   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:49.614342   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:50.102786   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:50.102932   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:50.113851   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:50.604632   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:50.604853   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:50.615814   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:51.102800   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:51.102922   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:51.113749   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:51.604662   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:51.604859   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:51.615882   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:52.104701   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:52.104901   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:52.115979   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:52.115988   24140 api_server.go:165] Checking apiserver status ...
	I0222 21:45:52.116043   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0222 21:45:52.124373   24140 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:52.124384   24140 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
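
The repeated "Checking apiserver status" entries above are a bounded poll: the same pgrep probe is retried roughly every 500ms until it succeeds or the overall timeout expires, at which point the restart path falls back to reconfiguring the cluster. A hedged sketch of that kind of loop (plain Go, not the actual minikube implementation):

// Rough sketch of a bounded poll like the one above: retry a probe every
// 500ms until it succeeds or the context deadline expires.
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same probe the log runs over ssh: pgrep for the apiserver process.
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for the condition")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServerPID(ctx))
}
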
	I0222 21:45:52.124392   24140 kubeadm.go:1120] stopping kube-system containers ...
	I0222 21:45:52.124467   24140 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0222 21:45:52.146250   24140 docker.go:456] Stopping containers: [3d9e072e71a3 eec774ed3b7c 81397e6fdcce 28eedf991bc2 4eac9352de9c 31794a160066 88959e3b8233 216057bdfee9 f8cb6dbee9c2 fa713f15d89a 5a9b06084b46 b99c29ada3d0 f2f1b8015665 d0b774bb51fb 3b49e583ae0b 77b86bd802fd]
	I0222 21:45:52.146335   24140 ssh_runner.go:195] Run: docker stop 3d9e072e71a3 eec774ed3b7c 81397e6fdcce 28eedf991bc2 4eac9352de9c 31794a160066 88959e3b8233 216057bdfee9 f8cb6dbee9c2 fa713f15d89a 5a9b06084b46 b99c29ada3d0 f2f1b8015665 d0b774bb51fb 3b49e583ae0b 77b86bd802fd
	I0222 21:45:52.166092   24140 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0222 21:45:52.176701   24140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0222 21:45:52.184606   24140 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 23 05:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 23 05:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 23 05:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 23 05:44 /etc/kubernetes/scheduler.conf
	
	I0222 21:45:52.184665   24140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0222 21:45:52.192330   24140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0222 21:45:52.200090   24140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0222 21:45:52.208405   24140 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:52.208465   24140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0222 21:45:52.217011   24140 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0222 21:45:52.225386   24140 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0222 21:45:52.225450   24140 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0222 21:45:52.233633   24140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0222 21:45:52.242099   24140 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0222 21:45:52.242115   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:45:52.301478   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:45:53.046305   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:45:53.186066   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:45:53.286976   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
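
After the configuration files are rewritten, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A small illustrative wrapper that runs those phases in order, using the exact commands shown in the log:

// Illustrative wrapper around the kubeadm init phases listed above; the
// commands are taken from the log, the wrapper itself is a sketch.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
}
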
	I0222 21:45:53.395183   24140 api_server.go:51] waiting for apiserver process to appear ...
	I0222 21:45:53.395252   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:45:53.907875   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:45:54.407539   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:45:54.490389   24140 api_server.go:71] duration metric: took 1.095185547s to wait for apiserver process to appear ...
	I0222 21:45:54.490416   24140 api_server.go:87] waiting for apiserver healthz status ...
	I0222 21:45:54.490434   24140 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56129/healthz ...
	I0222 21:45:54.492424   24140 api_server.go:268] stopped: https://127.0.0.1:56129/healthz: Get "https://127.0.0.1:56129/healthz": EOF
	I0222 21:45:54.992893   24140 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56129/healthz ...
	I0222 21:45:56.792668   24140 api_server.go:278] https://127.0.0.1:56129/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0222 21:45:56.792688   24140 api_server.go:102] status: https://127.0.0.1:56129/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0222 21:45:56.992632   24140 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56129/healthz ...
	I0222 21:45:56.998277   24140 api_server.go:278] https://127.0.0.1:56129/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:45:56.998293   24140 api_server.go:102] status: https://127.0.0.1:56129/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:45:57.492673   24140 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56129/healthz ...
	I0222 21:45:57.499190   24140 api_server.go:278] https://127.0.0.1:56129/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0222 21:45:57.499206   24140 api_server.go:102] status: https://127.0.0.1:56129/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0222 21:45:57.992634   24140 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56129/healthz ...
	I0222 21:45:57.998216   24140 api_server.go:278] https://127.0.0.1:56129/healthz returned 200:
	ok
	I0222 21:45:58.005427   24140 api_server.go:140] control plane version: v1.26.1
	I0222 21:45:58.005438   24140 api_server.go:130] duration metric: took 3.514956051s to wait for apiserver health ...
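
The healthz wait above shows the typical progression: 403 for the anonymous probe, then 500 while the rbac and priority-class post-start hooks finish, then a plain 200 "ok". A minimal sketch of such a poll against the forwarded port (TLS verification is skipped only because the probe targets 127.0.0.1 with a self-signed certificate; illustrative, not minikube's code):

// Minimal sketch of the healthz wait seen above: poll https://<host>/healthz
// until it returns 200, tolerating the interim 403/500 responses.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the log shows this arriving once the bootstrap hooks finish
			}
			fmt.Printf("not ready yet: %d %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s never returned 200", url)
}

func main() {
	fmt.Println(waitHealthz("https://127.0.0.1:56129/healthz", 30*time.Second))
}
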
	I0222 21:45:58.005443   24140 cni.go:84] Creating CNI manager for ""
	I0222 21:45:58.005453   24140 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 21:45:58.043726   24140 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0222 21:45:58.081454   24140 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0222 21:45:58.091674   24140 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0222 21:45:58.105165   24140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 21:45:58.113515   24140 system_pods.go:59] 9 kube-system pods found
	I0222 21:45:58.113535   24140 system_pods.go:61] "coredns-787d4945fb-6bkjk" [82609059-08bc-4f4d-9caa-f9f90e9ce479] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0222 21:45:58.113541   24140 system_pods.go:61] "coredns-787d4945fb-vdk88" [7561c10d-40b9-4c2f-bd92-ead3653253b3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0222 21:45:58.113547   24140 system_pods.go:61] "etcd-newest-cni-150000" [ca474e26-ceae-4d8a-b14a-73692130f540] Running
	I0222 21:45:58.113551   24140 system_pods.go:61] "kube-apiserver-newest-cni-150000" [0a00499c-83fc-4e98-9f05-316e0714d496] Running
	I0222 21:45:58.113555   24140 system_pods.go:61] "kube-controller-manager-newest-cni-150000" [627b597b-67c5-4c3d-b415-f5c03476dc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0222 21:45:58.113562   24140 system_pods.go:61] "kube-proxy-2bd7b" [e4c86cb7-8397-4b1e-bb0b-f3d4ad7732ee] Running
	I0222 21:45:58.113566   24140 system_pods.go:61] "kube-scheduler-newest-cni-150000" [0ee7d122-4145-412b-8174-2a0f67bc241d] Running
	I0222 21:45:58.113570   24140 system_pods.go:61] "metrics-server-7997d45854-plccr" [899cf5d0-d949-4a3c-af3e-4a32b8a99803] Pending
	I0222 21:45:58.113574   24140 system_pods.go:61] "storage-provisioner" [b258e92d-3ba6-4d75-b47f-99f80c641666] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0222 21:45:58.113579   24140 system_pods.go:74] duration metric: took 8.402408ms to wait for pod list to return data ...
	I0222 21:45:58.113584   24140 node_conditions.go:102] verifying NodePressure condition ...
	I0222 21:45:58.117021   24140 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 21:45:58.117036   24140 node_conditions.go:123] node cpu capacity is 6
	I0222 21:45:58.117046   24140 node_conditions.go:105] duration metric: took 3.457522ms to run NodePressure ...
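
The system_pods and NodePressure checks above read the pod list and node capacity through the API server. An illustrative client-go equivalent (assumes client-go is available and uses the kubeconfig path from this run; this is not minikube's own code):

// Illustrative client-go equivalent of the system_pods / NodePressure checks:
// list kube-system pods and print node capacity.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15909-2664/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name, "cpu:", n.Status.Capacity.Cpu(), "ephemeral:", n.Status.Capacity.StorageEphemeral())
	}
}
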
	I0222 21:45:58.117065   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0222 21:45:58.302017   24140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0222 21:45:58.310162   24140 ops.go:34] apiserver oom_adj: -16
	I0222 21:45:58.310179   24140 kubeadm.go:637] restartCluster took 16.296940839s
	I0222 21:45:58.310187   24140 kubeadm.go:403] StartCluster complete in 16.328250483s
	I0222 21:45:58.310201   24140 settings.go:142] acquiring lock: {Name:mk09b0ae3061a5d1df7256089aea48f15d65cbf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:45:58.310289   24140 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 21:45:58.310915   24140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/kubeconfig: {Name:mk83a1b8b942e240211e76ef0ac6b257474202a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 21:45:58.311156   24140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0222 21:45:58.311189   24140 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0222 21:45:58.311267   24140 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-150000"
	I0222 21:45:58.311286   24140 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-150000"
	W0222 21:45:58.311293   24140 addons.go:236] addon storage-provisioner should already be in state true
	I0222 21:45:58.311293   24140 addons.go:65] Setting dashboard=true in profile "newest-cni-150000"
	I0222 21:45:58.311315   24140 addons.go:227] Setting addon dashboard=true in "newest-cni-150000"
	I0222 21:45:58.311323   24140 addons.go:65] Setting default-storageclass=true in profile "newest-cni-150000"
	W0222 21:45:58.311331   24140 addons.go:236] addon dashboard should already be in state true
	I0222 21:45:58.311343   24140 host.go:66] Checking if "newest-cni-150000" exists ...
	I0222 21:45:58.311351   24140 config.go:182] Loaded profile config "newest-cni-150000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 21:45:58.311354   24140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-150000"
	I0222 21:45:58.311363   24140 host.go:66] Checking if "newest-cni-150000" exists ...
	I0222 21:45:58.311366   24140 addons.go:65] Setting metrics-server=true in profile "newest-cni-150000"
	I0222 21:45:58.311415   24140 addons.go:227] Setting addon metrics-server=true in "newest-cni-150000"
	W0222 21:45:58.311431   24140 addons.go:236] addon metrics-server should already be in state true
	I0222 21:45:58.311483   24140 host.go:66] Checking if "newest-cni-150000" exists ...
	I0222 21:45:58.311767   24140 cli_runner.go:164] Run: docker container inspect newest-cni-150000 --format={{.State.Status}}
	I0222 21:45:58.311790   24140 cli_runner.go:164] Run: docker container inspect newest-cni-150000 --format={{.State.Status}}
	I0222 21:45:58.311833   24140 cli_runner.go:164] Run: docker container inspect newest-cni-150000 --format={{.State.Status}}
	I0222 21:45:58.311938   24140 cli_runner.go:164] Run: docker container inspect newest-cni-150000 --format={{.State.Status}}
	I0222 21:45:58.323144   24140 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-150000" context rescaled to 1 replicas
	I0222 21:45:58.323189   24140 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0222 21:45:58.353343   24140 out.go:177] * Verifying Kubernetes components...
	I0222 21:45:58.410188   24140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 21:45:58.471043   24140 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0222 21:45:58.528950   24140 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0222 21:45:58.479435   24140 addons.go:227] Setting addon default-storageclass=true in "newest-cni-150000"
	I0222 21:45:58.493026   24140 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	W0222 21:45:58.529002   24140 addons.go:236] addon default-storageclass should already be in state true
	I0222 21:45:58.493054   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:58.529053   24140 host.go:66] Checking if "newest-cni-150000" exists ...
	I0222 21:45:58.507997   24140 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0222 21:45:58.530376   24140 cli_runner.go:164] Run: docker container inspect newest-cni-150000 --format={{.State.Status}}
	I0222 21:45:58.550580   24140 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:45:58.608183   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0222 21:45:58.608201   24140 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0222 21:45:58.629864   24140 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0222 21:45:58.608227   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0222 21:45:58.608315   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:58.667390   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0222 21:45:58.667417   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0222 21:45:58.667461   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:58.668220   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:58.686046   24140 api_server.go:51] waiting for apiserver process to appear ...
	I0222 21:45:58.686318   24140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 21:45:58.687120   24140 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0222 21:45:58.687141   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0222 21:45:58.687244   24140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-150000
	I0222 21:45:58.707255   24140 api_server.go:71] duration metric: took 384.004547ms to wait for apiserver process to appear ...
	I0222 21:45:58.707280   24140 api_server.go:87] waiting for apiserver healthz status ...
	I0222 21:45:58.707297   24140 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56129/healthz ...
	I0222 21:45:58.715748   24140 api_server.go:278] https://127.0.0.1:56129/healthz returned 200:
	ok
	I0222 21:45:58.718207   24140 api_server.go:140] control plane version: v1.26.1
	I0222 21:45:58.718240   24140 api_server.go:130] duration metric: took 10.938836ms to wait for apiserver health ...
	I0222 21:45:58.718255   24140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0222 21:45:58.730396   24140 system_pods.go:59] 9 kube-system pods found
	I0222 21:45:58.730442   24140 system_pods.go:61] "coredns-787d4945fb-6bkjk" [82609059-08bc-4f4d-9caa-f9f90e9ce479] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0222 21:45:58.730461   24140 system_pods.go:61] "coredns-787d4945fb-vdk88" [7561c10d-40b9-4c2f-bd92-ead3653253b3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0222 21:45:58.730474   24140 system_pods.go:61] "etcd-newest-cni-150000" [ca474e26-ceae-4d8a-b14a-73692130f540] Running
	I0222 21:45:58.730485   24140 system_pods.go:61] "kube-apiserver-newest-cni-150000" [0a00499c-83fc-4e98-9f05-316e0714d496] Running
	I0222 21:45:58.730509   24140 system_pods.go:61] "kube-controller-manager-newest-cni-150000" [627b597b-67c5-4c3d-b415-f5c03476dc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0222 21:45:58.730520   24140 system_pods.go:61] "kube-proxy-2bd7b" [e4c86cb7-8397-4b1e-bb0b-f3d4ad7732ee] Running
	I0222 21:45:58.730527   24140 system_pods.go:61] "kube-scheduler-newest-cni-150000" [0ee7d122-4145-412b-8174-2a0f67bc241d] Running
	I0222 21:45:58.730535   24140 system_pods.go:61] "metrics-server-7997d45854-plccr" [899cf5d0-d949-4a3c-af3e-4a32b8a99803] Pending
	I0222 21:45:58.730544   24140 system_pods.go:61] "storage-provisioner" [b258e92d-3ba6-4d75-b47f-99f80c641666] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0222 21:45:58.730552   24140 system_pods.go:74] duration metric: took 12.289134ms to wait for pod list to return data ...
	I0222 21:45:58.730567   24140 default_sa.go:34] waiting for default service account to be created ...
	I0222 21:45:58.761116   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:58.761586   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:58.763363   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:58.774130   24140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56130 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/newest-cni-150000/id_rsa Username:docker}
	I0222 21:45:58.779560   24140 default_sa.go:45] found service account: "default"
	I0222 21:45:58.779576   24140 default_sa.go:55] duration metric: took 49.000765ms for default service account to be created ...
	I0222 21:45:58.779584   24140 kubeadm.go:578] duration metric: took 456.343952ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0222 21:45:58.779597   24140 node_conditions.go:102] verifying NodePressure condition ...
	I0222 21:45:58.782270   24140 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0222 21:45:58.782288   24140 node_conditions.go:123] node cpu capacity is 6
	I0222 21:45:58.782299   24140 node_conditions.go:105] duration metric: took 2.667301ms to run NodePressure ...
	I0222 21:45:58.782316   24140 start.go:228] waiting for startup goroutines ...
	I0222 21:45:58.997327   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0222 21:45:58.997343   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0222 21:45:58.998688   24140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0222 21:45:59.002329   24140 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0222 21:45:59.002343   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0222 21:45:59.023314   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0222 21:45:59.023329   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0222 21:45:59.078889   24140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0222 21:45:59.080323   24140 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0222 21:45:59.080342   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0222 21:45:59.097335   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0222 21:45:59.097353   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0222 21:45:59.183498   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0222 21:45:59.183514   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0222 21:45:59.186168   24140 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0222 21:45:59.186185   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0222 21:45:59.209367   24140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0222 21:45:59.279606   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0222 21:45:59.279632   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0222 21:45:59.305286   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0222 21:45:59.305307   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0222 21:45:59.485744   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0222 21:45:59.485759   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0222 21:45:59.583795   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0222 21:45:59.583812   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0222 21:45:59.608806   24140 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0222 21:45:59.608824   24140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0222 21:45:59.692736   24140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0222 21:46:00.611768   24140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.613009551s)
	I0222 21:46:00.611816   24140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.532875646s)
	I0222 21:46:00.621111   24140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.411692238s)
	I0222 21:46:00.621130   24140 addons.go:457] Verifying addon metrics-server=true in "newest-cni-150000"
	I0222 21:46:00.779253   24140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.086461162s)
	I0222 21:46:00.804830   24140 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-150000 addons enable metrics-server	
	
	
	I0222 21:46:00.878782   24140 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0222 21:46:00.937550   24140 addons.go:492] enable addons completed in 2.626316507s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0222 21:46:00.937635   24140 start.go:233] waiting for cluster config update ...
	I0222 21:46:00.937668   24140 start.go:242] writing updated cluster config ...
	I0222 21:46:00.938261   24140 ssh_runner.go:195] Run: rm -f paused
	I0222 21:46:00.978860   24140 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0222 21:46:00.999875   24140 out.go:177] * Done! kubectl is now configured to use "newest-cni-150000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 05:21:27 UTC, end at Thu 2023-02-23 05:48:29 UTC. --
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.360651674Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.361101847Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.361152173Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362113546Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362160116Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362184831Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362195048Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362224721Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362304394Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362362371Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362385432Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362403799Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362704899Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362772253Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.362790217Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.363289477Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.370809711Z" level=info msg="Loading containers: start."
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.448406285Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.481550543Z" level=info msg="Loading containers: done."
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.490307846Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.490373476Z" level=info msg="Daemon has completed initialization"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.513109070Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 05:21:30 old-k8s-version-865000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.517014469Z" level=info msg="API listen on [::]:2376"
	Feb 23 05:21:30 old-k8s-version-865000 dockerd[639]: time="2023-02-23T05:21:30.523278664Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-23T05:48:31Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Feb23 05:11] hrtimer: interrupt took 1057500 ns
	
	* 
	* ==> kernel <==
	*  05:48:32 up  1:47,  0 users,  load average: 0.30, 0.59, 0.86
	Linux old-k8s-version-865000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 05:21:27 UTC, end at Thu 2023-02-23 05:48:32 UTC. --
	Feb 23 05:48:30 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 05:48:31 old-k8s-version-865000 kubelet[34317]: I0223 05:48:31.265830   34317 server.go:410] Version: v1.16.0
	Feb 23 05:48:31 old-k8s-version-865000 kubelet[34317]: I0223 05:48:31.266126   34317 plugins.go:100] No cloud provider specified.
	Feb 23 05:48:31 old-k8s-version-865000 kubelet[34317]: I0223 05:48:31.266186   34317 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 05:48:31 old-k8s-version-865000 kubelet[34317]: I0223 05:48:31.267865   34317 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 05:48:31 old-k8s-version-865000 kubelet[34317]: W0223 05:48:31.268581   34317 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 05:48:31 old-k8s-version-865000 kubelet[34317]: W0223 05:48:31.268651   34317 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 05:48:31 old-k8s-version-865000 kubelet[34317]: F0223 05:48:31.268675   34317 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 05:48:31 old-k8s-version-865000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 05:48:32 old-k8s-version-865000 kubelet[34342]: I0223 05:48:32.027911   34342 server.go:410] Version: v1.16.0
	Feb 23 05:48:32 old-k8s-version-865000 kubelet[34342]: I0223 05:48:32.028266   34342 plugins.go:100] No cloud provider specified.
	Feb 23 05:48:32 old-k8s-version-865000 kubelet[34342]: I0223 05:48:32.028282   34342 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 05:48:32 old-k8s-version-865000 kubelet[34342]: I0223 05:48:32.031694   34342 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 05:48:32 old-k8s-version-865000 kubelet[34342]: W0223 05:48:32.032283   34342 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 05:48:32 old-k8s-version-865000 kubelet[34342]: W0223 05:48:32.032346   34342 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 05:48:32 old-k8s-version-865000 kubelet[34342]: F0223 05:48:32.032369   34342 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 05:48:32 old-k8s-version-865000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 05:48:32 old-k8s-version-865000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 21:48:32.007158   24560 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 2 (391.365597ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-865000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.72s)
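The kubelet journal above fails the same way on every restart ("failed to run Kubelet: mountpoint for cpu not found", restart counter 1668 and 1669), meaning the v1.16 kubelet cannot find a cgroup mount that exposes the cpu controller inside the node container. As a triage aid, here is a minimal Go sketch of that check; it is illustrative only, not part of the minikube test suite, and it simply scans /proc/mounts for a cgroup entry whose mount options include "cpu".

// Illustrative only: approximates the "mountpoint for cpu" lookup the kubelet
// log above complains about, by scanning /proc/mounts for a cgroup (v1) mount
// whose options list the cpu controller.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading /proc/mounts:", err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line: <source> <mountpoint> <fstype> <options> <dump> <pass>
		fields := strings.Fields(sc.Text())
		if len(fields) < 4 || fields[2] != "cgroup" {
			continue
		}
		for _, opt := range strings.Split(fields[3], ",") {
			if opt == "cpu" {
				fmt.Println("cpu cgroup mounted at", fields[1])
				return
			}
		}
	}
	fmt.Println("mountpoint for cpu not found")
	os.Exit(1)
}

Run inside the node (for example via minikube ssh -p old-k8s-version-865000) it prints the mountpoint when the controller is visible and the kubelet's own wording when it is not; if the Docker Desktop VM only exposes the unified cgroup v2 hierarchy, no v1 "cgroup" entries appear in /proc/mounts, which would produce exactly this error.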

                                                
                                    

Test pass (272/306)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.73
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.26.1/json-events 11.44
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.66
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 2.12
19 TestBinaryMirror 1.66
20 TestOffline 49.05
22 TestAddons/Setup 139.35
26 TestAddons/parallel/MetricsServer 5.61
27 TestAddons/parallel/HelmTiller 12.79
29 TestAddons/parallel/CSI 50.41
30 TestAddons/parallel/Headlamp 13.74
31 TestAddons/parallel/CloudSpanner 5.47
34 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/StoppedEnableDisable 11.39
36 TestCertOptions 33.66
37 TestCertExpiration 270.59
38 TestDockerFlags 37.34
39 TestForceSystemdFlag 34.92
40 TestForceSystemdEnv 33.13
42 TestHyperKitDriverInstallOrUpdate 5.51
45 TestErrorSpam/setup 34.25
46 TestErrorSpam/start 2.3
47 TestErrorSpam/status 1.24
48 TestErrorSpam/pause 1.71
49 TestErrorSpam/unpause 1.81
50 TestErrorSpam/stop 2.78
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 44.63
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 40.01
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.44
62 TestFunctional/serial/CacheCmd/cache/add_local 1.71
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.91
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.53
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.8
70 TestFunctional/serial/ExtraConfig 44.95
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 3.09
73 TestFunctional/serial/LogsFileCmd 3.12
75 TestFunctional/parallel/ConfigCmd 0.44
76 TestFunctional/parallel/DashboardCmd 13.87
77 TestFunctional/parallel/DryRun 1.59
78 TestFunctional/parallel/InternationalLanguage 0.83
79 TestFunctional/parallel/StatusCmd 1.28
84 TestFunctional/parallel/AddonsCmd 0.25
85 TestFunctional/parallel/PersistentVolumeClaim 26.54
87 TestFunctional/parallel/SSHCmd 1.01
88 TestFunctional/parallel/CpCmd 1.95
89 TestFunctional/parallel/MySQL 22.2
90 TestFunctional/parallel/FileSync 0.52
91 TestFunctional/parallel/CertSync 2.84
95 TestFunctional/parallel/NodeLabels 0.13
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
99 TestFunctional/parallel/License 0.76
100 TestFunctional/parallel/Version/short 0.09
101 TestFunctional/parallel/Version/components 0.63
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
106 TestFunctional/parallel/ImageCommands/ImageBuild 5.49
107 TestFunctional/parallel/ImageCommands/Setup 3.13
108 TestFunctional/parallel/DockerEnv/bash 2.1
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.48
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.78
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.8
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.89
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.25
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.89
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.52
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.2
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
129 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 1.1
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
131 TestFunctional/parallel/ProfileCmd/profile_list 0.52
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
133 TestFunctional/parallel/MountCmd/any-port 10.76
134 TestFunctional/parallel/MountCmd/specific-port 2.57
135 TestFunctional/delete_addon-resizer_images 0.15
136 TestFunctional/delete_my-image_image 0.06
137 TestFunctional/delete_minikube_cached_images 0.06
141 TestImageBuild/serial/NormalBuild 2.3
142 TestImageBuild/serial/BuildWithBuildArg 0.97
143 TestImageBuild/serial/BuildWithDockerIgnore 0.47
144 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.4
154 TestJSONOutput/start/Command 45.9
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.66
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.6
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 5.84
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.73
179 TestKicCustomNetwork/create_custom_network 32.34
180 TestKicCustomNetwork/use_default_bridge_network 31.01
181 TestKicExistingNetwork 30.98
182 TestKicCustomSubnet 35.03
183 TestKicStaticIP 32.17
184 TestMainNoArgs 0.07
185 TestMinikubeProfile 63.02
188 TestMountStart/serial/StartWithMountFirst 8.32
189 TestMountStart/serial/VerifyMountFirst 0.4
190 TestMountStart/serial/StartWithMountSecond 8.14
191 TestMountStart/serial/VerifyMountSecond 0.4
192 TestMountStart/serial/DeleteFirst 2.12
193 TestMountStart/serial/VerifyMountPostDelete 0.39
194 TestMountStart/serial/Stop 1.57
195 TestMountStart/serial/RestartStopped 6.11
196 TestMountStart/serial/VerifyMountPostStop 0.4
199 TestMultiNode/serial/FreshStart2Nodes 78.62
202 TestMultiNode/serial/AddNode 21.4
203 TestMultiNode/serial/ProfileList 0.47
204 TestMultiNode/serial/CopyFile 14.95
205 TestMultiNode/serial/StopNode 3.06
206 TestMultiNode/serial/StartAfterStop 13.13
207 TestMultiNode/serial/RestartKeepsNodes 86.09
208 TestMultiNode/serial/DeleteNode 6.21
209 TestMultiNode/serial/StopMultiNode 21.92
210 TestMultiNode/serial/RestartMultiNode 53.61
211 TestMultiNode/serial/ValidateNameConflict 33.4
215 TestPreload 137.64
217 TestScheduledStopUnix 107.72
218 TestSkaffold 65.74
220 TestInsufficientStorage 14.49
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 10.43
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.97
238 TestStoppedBinaryUpgrade/Setup 1.64
240 TestStoppedBinaryUpgrade/MinikubeLogs 3.51
242 TestPause/serial/Start 49.38
243 TestPause/serial/SecondStartNoReconfiguration 45.3
244 TestPause/serial/Pause 0.7
245 TestPause/serial/VerifyStatus 0.42
246 TestPause/serial/Unpause 0.66
247 TestPause/serial/PauseAgain 0.81
248 TestPause/serial/DeletePaused 2.97
249 TestPause/serial/VerifyDeletedResources 2.65
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.67
259 TestNoKubernetes/serial/StartWithK8s 37.6
260 TestNoKubernetes/serial/StartWithStopK8s 9.1
261 TestNoKubernetes/serial/Start 7.49
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
263 TestNoKubernetes/serial/ProfileList 15.79
264 TestNoKubernetes/serial/Stop 1.61
265 TestNoKubernetes/serial/StartNoArgs 5.26
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
267 TestNetworkPlugins/group/auto/Start 49.35
268 TestNetworkPlugins/group/auto/KubeletFlags 0.41
269 TestNetworkPlugins/group/auto/NetCatPod 11.19
270 TestNetworkPlugins/group/auto/DNS 0.13
271 TestNetworkPlugins/group/auto/Localhost 0.11
272 TestNetworkPlugins/group/auto/HairPin 0.12
273 TestNetworkPlugins/group/kindnet/Start 57.32
274 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
275 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
276 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
277 TestNetworkPlugins/group/kindnet/DNS 0.14
278 TestNetworkPlugins/group/kindnet/Localhost 0.11
279 TestNetworkPlugins/group/kindnet/HairPin 0.12
280 TestNetworkPlugins/group/calico/Start 81.37
281 TestNetworkPlugins/group/custom-flannel/Start 61.39
282 TestNetworkPlugins/group/calico/ControllerPod 5.02
283 TestNetworkPlugins/group/calico/KubeletFlags 0.44
284 TestNetworkPlugins/group/calico/NetCatPod 15.24
285 TestNetworkPlugins/group/calico/DNS 0.14
286 TestNetworkPlugins/group/calico/Localhost 0.14
287 TestNetworkPlugins/group/calico/HairPin 0.12
288 TestNetworkPlugins/group/false/Start 52.27
289 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.62
290 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
291 TestNetworkPlugins/group/custom-flannel/DNS 0.14
292 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
293 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
294 TestNetworkPlugins/group/enable-default-cni/Start 51.22
295 TestNetworkPlugins/group/false/KubeletFlags 0.44
296 TestNetworkPlugins/group/false/NetCatPod 11.23
297 TestNetworkPlugins/group/false/DNS 0.14
298 TestNetworkPlugins/group/false/Localhost 0.12
299 TestNetworkPlugins/group/false/HairPin 0.13
300 TestNetworkPlugins/group/flannel/Start 58.79
301 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.54
302 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.22
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
306 TestNetworkPlugins/group/bridge/Start 57.35
307 TestNetworkPlugins/group/flannel/ControllerPod 5.01
308 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
309 TestNetworkPlugins/group/flannel/NetCatPod 12.18
310 TestNetworkPlugins/group/flannel/DNS 0.13
311 TestNetworkPlugins/group/flannel/Localhost 0.11
312 TestNetworkPlugins/group/flannel/HairPin 0.12
313 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
314 TestNetworkPlugins/group/bridge/NetCatPod 13.23
315 TestNetworkPlugins/group/kubenet/Start 52.25
316 TestNetworkPlugins/group/bridge/DNS 0.44
317 TestNetworkPlugins/group/bridge/Localhost 0.16
318 TestNetworkPlugins/group/bridge/HairPin 0.15
321 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
322 TestNetworkPlugins/group/kubenet/NetCatPod 11.2
323 TestNetworkPlugins/group/kubenet/DNS 0.13
324 TestNetworkPlugins/group/kubenet/Localhost 0.12
325 TestNetworkPlugins/group/kubenet/HairPin 0.11
327 TestStartStop/group/no-preload/serial/FirstStart 59.16
328 TestStartStop/group/no-preload/serial/DeployApp 9.27
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
330 TestStartStop/group/no-preload/serial/Stop 10.95
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.38
332 TestStartStop/group/no-preload/serial/SecondStart 305.34
335 TestStartStop/group/old-k8s-version/serial/Stop 1.57
336 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.07
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
341 TestStartStop/group/no-preload/serial/Pause 3.36
343 TestStartStop/group/embed-certs/serial/FirstStart 59.55
344 TestStartStop/group/embed-certs/serial/DeployApp 10.28
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
346 TestStartStop/group/embed-certs/serial/Stop 11.07
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
348 TestStartStop/group/embed-certs/serial/SecondStart 551.04
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
353 TestStartStop/group/embed-certs/serial/Pause 3.24
355 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.64
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.94
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.38
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 553.6
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.45
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.37
367 TestStartStop/group/newest-cni/serial/FirstStart 43.41
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
370 TestStartStop/group/newest-cni/serial/Stop 10.95
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
372 TestStartStop/group/newest-cni/serial/SecondStart 25.25
373 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.47
376 TestStartStop/group/newest-cni/serial/Pause 3.31
TestDownloadOnly/v1.16.0/json-events (16.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-639000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-639000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (16.732500656s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.73s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-639000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-639000: exit status 85 (285.74247ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-639000 | jenkins | v1.29.0 | 22 Feb 23 20:21 PST |          |
	|         | -p download-only-639000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 20:21:49
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 20:21:49.695978    3135 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:21:49.696142    3135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:21:49.696147    3135 out.go:309] Setting ErrFile to fd 2...
	I0222 20:21:49.696150    3135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:21:49.696255    3135 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	W0222 20:21:49.696358    3135 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-2664/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-2664/.minikube/config/config.json: no such file or directory
	I0222 20:21:49.697950    3135 out.go:303] Setting JSON to true
	I0222 20:21:49.716525    3135 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1284,"bootTime":1677124825,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 20:21:49.716618    3135 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 20:21:49.739561    3135 out.go:97] [download-only-639000] minikube v1.29.0 on Darwin 13.2
	I0222 20:21:49.739818    3135 notify.go:220] Checking for updates...
	W0222 20:21:49.739854    3135 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball: no such file or directory
	I0222 20:21:49.759994    3135 out.go:169] MINIKUBE_LOCATION=15909
	I0222 20:21:49.802296    3135 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:21:49.824115    3135 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 20:21:49.845446    3135 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 20:21:49.867258    3135 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	W0222 20:21:49.909080    3135 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0222 20:21:49.909450    3135 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 20:21:49.968582    3135 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 20:21:49.968692    3135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:21:50.113699    3135 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 04:21:50.018766197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:21:50.135340    3135 out.go:97] Using the docker driver based on user configuration
	I0222 20:21:50.135390    3135 start.go:296] selected driver: docker
	I0222 20:21:50.135407    3135 start.go:857] validating driver "docker" against <nil>
	I0222 20:21:50.135625    3135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:21:50.280361    3135 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 04:21:50.187983319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:21:50.280492    3135 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0222 20:21:50.284465    3135 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0222 20:21:50.284631    3135 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0222 20:21:50.305649    3135 out.go:169] Using Docker Desktop driver with root privileges
	I0222 20:21:50.326832    3135 cni.go:84] Creating CNI manager for ""
	I0222 20:21:50.326871    3135 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0222 20:21:50.326884    3135 start_flags.go:319] config:
	{Name:download-only-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-639000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:21:50.348730    3135 out.go:97] Starting control plane node download-only-639000 in cluster download-only-639000
	I0222 20:21:50.348823    3135 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 20:21:50.369564    3135 out.go:97] Pulling base image ...
	I0222 20:21:50.369703    3135 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 20:21:50.369794    3135 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 20:21:50.426228    3135 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0222 20:21:50.426470    3135 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0222 20:21:50.426597    3135 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0222 20:21:50.505842    3135 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0222 20:21:50.505877    3135 cache.go:57] Caching tarball of preloaded images
	I0222 20:21:50.506183    3135 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 20:21:50.528293    3135 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0222 20:21:50.528341    3135 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:21:50.749750    3135 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0222 20:21:57.337075    3135 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:21:57.337226    3135 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:21:57.883140    3135 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0222 20:21:57.883339    3135 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/download-only-639000/config.json ...
	I0222 20:21:57.883365    3135 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/download-only-639000/config.json: {Name:mkbab377d88ebadd2ff0d42beb2bba4b1d365b78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0222 20:21:57.883624    3135 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0222 20:21:57.883886    3135 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-639000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
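The "Last Start" log above also documents how the v1.16.0 preload is fetched: download.go:107 requests the tarball with a ?checksum=md5:... query, and preload.go then saves and verifies that checksum before the tarball is cached. The sketch below is not minikube's downloader; it only shows the same download-then-verify-md5 pattern in plain Go, reusing the URL and md5 printed in the log (the destination path is a placeholder).

// Illustrative only: download a file and verify its md5 in one pass.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Write to the file and the hasher in a single pass.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and md5 taken from the log above; destination path is a placeholder.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"/tmp/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("download verified")
}

Hashing inside the same io.Copy pass avoids re-reading the tarball from disk just to verify it.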

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (11.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-639000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-639000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (11.442784729s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (11.44s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-639000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-639000: exit status 85 (289.473634ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-639000 | jenkins | v1.29.0 | 22 Feb 23 20:21 PST |          |
	|         | -p download-only-639000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-639000 | jenkins | v1.29.0 | 22 Feb 23 20:22 PST |          |
	|         | -p download-only-639000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/22 20:22:06
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0222 20:22:06.717031    3186 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:22:06.717218    3186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:22:06.717223    3186 out.go:309] Setting ErrFile to fd 2...
	I0222 20:22:06.717227    3186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:22:06.717342    3186 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	W0222 20:22:06.717433    3186 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-2664/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-2664/.minikube/config/config.json: no such file or directory
	I0222 20:22:06.718665    3186 out.go:303] Setting JSON to true
	I0222 20:22:06.736985    3186 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1301,"bootTime":1677124825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 20:22:06.737076    3186 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 20:22:06.758807    3186 out.go:97] [download-only-639000] minikube v1.29.0 on Darwin 13.2
	I0222 20:22:06.759017    3186 notify.go:220] Checking for updates...
	I0222 20:22:06.780850    3186 out.go:169] MINIKUBE_LOCATION=15909
	I0222 20:22:06.802743    3186 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:22:06.824777    3186 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 20:22:06.846923    3186 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 20:22:06.868784    3186 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	W0222 20:22:06.911661    3186 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0222 20:22:06.912421    3186 config.go:182] Loaded profile config "download-only-639000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0222 20:22:06.912503    3186 start.go:765] api.Load failed for download-only-639000: filestore "download-only-639000": Docker machine "download-only-639000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0222 20:22:06.912569    3186 driver.go:365] Setting default libvirt URI to qemu:///system
	W0222 20:22:06.912603    3186 start.go:765] api.Load failed for download-only-639000: filestore "download-only-639000": Docker machine "download-only-639000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0222 20:22:06.971238    3186 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 20:22:06.971333    3186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:22:07.113311    3186 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 04:22:07.020406385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:22:07.134389    3186 out.go:97] Using the docker driver based on existing profile
	I0222 20:22:07.134431    3186 start.go:296] selected driver: docker
	I0222 20:22:07.134480    3186 start.go:857] validating driver "docker" against &{Name:download-only-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-639000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP:}
	I0222 20:22:07.134771    3186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:22:07.278954    3186 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 04:22:07.184688844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:22:07.281384    3186 cni.go:84] Creating CNI manager for ""
	I0222 20:22:07.281405    3186 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0222 20:22:07.281416    3186 start_flags.go:319] config:
	{Name:download-only-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-639000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:22:07.302507    3186 out.go:97] Starting control plane node download-only-639000 in cluster download-only-639000
	I0222 20:22:07.302566    3186 cache.go:120] Beginning downloading kic base image for docker with docker
	I0222 20:22:07.324568    3186 out.go:97] Pulling base image ...
	I0222 20:22:07.324705    3186 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:22:07.324783    3186 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0222 20:22:07.378663    3186 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0222 20:22:07.378834    3186 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0222 20:22:07.378856    3186 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory, skipping pull
	I0222 20:22:07.378861    3186 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in cache, skipping pull
	I0222 20:22:07.378869    3186 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc as a tarball
	I0222 20:22:07.418543    3186 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 20:22:07.418585    3186 cache.go:57] Caching tarball of preloaded images
	I0222 20:22:07.418930    3186 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:22:07.440421    3186 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0222 20:22:07.440492    3186 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:22:07.655000    3186 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0222 20:22:15.007971    3186 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:22:15.008151    3186 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0222 20:22:15.610120    3186 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0222 20:22:15.610192    3186 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/download-only-639000/config.json ...
	I0222 20:22:15.610576    3186 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0222 20:22:15.610815    3186 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.1/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/darwin/amd64/v1.26.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-639000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.66s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.66s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-639000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnlyKic (2.12s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-009000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-009000
--- PASS: TestDownloadOnlyKic (2.12s)

                                                
                                    
TestBinaryMirror (1.66s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-498000 --alsologtostderr --binary-mirror http://127.0.0.1:49391 --driver=docker 
aaa_download_only_test.go:308: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-498000 --alsologtostderr --binary-mirror http://127.0.0.1:49391 --driver=docker : (1.053351749s)
helpers_test.go:175: Cleaning up "binary-mirror-498000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-498000
--- PASS: TestBinaryMirror (1.66s)

                                                
                                    
TestOffline (49.05s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-624000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-624000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (46.30434737s)
helpers_test.go:175: Cleaning up "offline-docker-624000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-624000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-624000: (2.742216106s)
--- PASS: TestOffline (49.05s)

                                                
                                    
TestAddons/Setup (139.35s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-566000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-566000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.347831029s)
--- PASS: TestAddons/Setup (139.35s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.624739ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-45dw9" [3611a4fb-4d3e-443d-9f7a-40a0c7f2ac64] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009055766s
addons_test.go:380: (dbg) Run:  kubectl --context addons-566000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-566000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.61s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.79s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.633513ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-pszqm" [a7061575-ecaf-4ed2-81cf-42bbc4d50e77] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008883804s
addons_test.go:438: (dbg) Run:  kubectl --context addons-566000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-566000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.266284784s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-566000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.79s)

                                                
                                    
TestAddons/parallel/CSI (50.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.769184ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-566000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-566000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [80a5e154-f630-4781-9b6e-6aec50dcc62a] Pending
helpers_test.go:344: "task-pv-pod" [80a5e154-f630-4781-9b6e-6aec50dcc62a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [80a5e154-f630-4781-9b6e-6aec50dcc62a] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.008388528s
addons_test.go:549: (dbg) Run:  kubectl --context addons-566000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-566000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-566000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-566000 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-566000 delete pod task-pv-pod: (1.025428673s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-566000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-566000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-566000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-566000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f877c296-1b46-4361-9727-a633c4b5a74f] Pending
helpers_test.go:344: "task-pv-pod-restore" [f877c296-1b46-4361-9727-a633c4b5a74f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f877c296-1b46-4361-9727-a633c4b5a74f] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007498487s
addons_test.go:591: (dbg) Run:  kubectl --context addons-566000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-566000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-566000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-566000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-566000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.537656531s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-566000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.41s)

                                                
                                    
TestAddons/parallel/Headlamp (13.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-566000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-566000 --alsologtostderr -v=1: (1.733058858s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-qg82h" [778e86f4-7c20-4077-8147-d923cd9ed12f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-qg82h" [778e86f4-7c20-4077-8147-d923cd9ed12f] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.009142514s
--- PASS: TestAddons/parallel/Headlamp (13.74s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-sqglx" [22beea43-e104-4f21-a71e-0d7b5fb49ab9] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00773523s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-566000
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-566000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-566000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-566000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-566000: (10.963655812s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-566000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-566000
--- PASS: TestAddons/StoppedEnableDisable (11.39s)

                                                
                                    
TestCertOptions (33.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-921000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-921000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (30.170041139s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-921000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-921000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-921000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-921000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-921000: (2.590958938s)
--- PASS: TestCertOptions (33.66s)

                                                
                                    
TestCertExpiration (270.59s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-370000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-370000 --memory=2048 --cert-expiration=3m --driver=docker : (30.969536028s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-370000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0222 21:01:40.454263    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-370000 --memory=2048 --cert-expiration=8760h --driver=docker : (57.040811996s)
helpers_test.go:175: Cleaning up "cert-expiration-370000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-370000
E0222 21:02:21.415916    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-370000: (2.582612157s)
--- PASS: TestCertExpiration (270.59s)

                                                
                                    
TestDockerFlags (37.34s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-347000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-347000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (33.800509784s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-347000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-347000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-347000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-347000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-347000: (2.660342626s)
--- PASS: TestDockerFlags (37.34s)

                                                
                                    
TestForceSystemdFlag (34.92s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-741000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-741000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (31.58767236s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-741000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-741000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-741000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-741000: (2.874647606s)
--- PASS: TestForceSystemdFlag (34.92s)

                                                
                                    
TestForceSystemdEnv (33.13s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-804000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-804000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (29.995828303s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-804000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-804000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-804000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-804000: (2.664086007s)
--- PASS: TestForceSystemdEnv (33.13s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (5.51s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (5.51s)

                                                
                                    
TestErrorSpam/setup (34.25s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-638000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-638000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 --driver=docker : (34.24996695s)
--- PASS: TestErrorSpam/setup (34.25s)

                                                
                                    
TestErrorSpam/start (2.3s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 start --dry-run
--- PASS: TestErrorSpam/start (2.30s)

                                                
                                    
TestErrorSpam/status (1.24s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 status
--- PASS: TestErrorSpam/status (1.24s)

                                                
                                    
TestErrorSpam/pause (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 pause
--- PASS: TestErrorSpam/pause (1.71s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (2.78s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 stop: (2.149951889s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-638000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-638000 stop
--- PASS: TestErrorSpam/stop (2.78s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/test/nested/copy/3133/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (44.63s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-106000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2199: (dbg) Done: out/minikube-darwin-amd64 start -p functional-106000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (44.629621137s)
--- PASS: TestFunctional/serial/StartWithProxy (44.63s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.01s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-106000 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-darwin-amd64 start -p functional-106000 --alsologtostderr -v=8: (40.009412315s)
functional_test.go:657: soft start took 40.010024935s for "functional-106000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.01s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-106000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (8.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 cache add k8s.gcr.io/pause:3.1: (2.907936092s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 cache add k8s.gcr.io/pause:3.3: (2.908617446s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 cache add k8s.gcr.io/pause:latest: (2.627499695s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-106000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local300978475/001
functional_test.go:1083: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cache add minikube-local-cache-test:functional-106000
functional_test.go:1083: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 cache add minikube-local-cache-test:functional-106000: (1.173869036s)
functional_test.go:1088: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cache delete minikube-local-cache-test:functional-106000
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-106000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (392.896729ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cache reload
functional_test.go:1152: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 cache reload: (1.65994731s)
functional_test.go:1157: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.91s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 kubectl -- --context functional-106000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.8s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-106000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.80s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.95s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-106000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0222 20:29:42.945660    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:42.951910    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:42.964034    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:42.984115    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:43.024463    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:43.106693    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:43.268689    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:43.589434    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:44.229893    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:45.510068    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:29:48.071432    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-darwin-amd64 start -p functional-106000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.948970181s)
functional_test.go:755: restart took 44.949179739s for "functional-106000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.95s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-106000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (3.09s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 logs
E0222 20:29:53.192278    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
functional_test.go:1230: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 logs: (3.089715693s)
--- PASS: TestFunctional/serial/LogsCmd (3.09s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.12s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3314005479/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3314005479/001/logs.txt: (3.119114499s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.12s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 config get cpus: exit status 14 (50.650843ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 config get cpus: exit status 14 (48.109109ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-106000 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-106000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5733: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.87s)

                                                
                                    
TestFunctional/parallel/DryRun (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-106000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:968: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-106000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (766.478086ms)

                                                
                                                
-- stdout --
	* [functional-106000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 20:31:02.949467    5581 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:31:02.949677    5581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:31:02.949682    5581 out.go:309] Setting ErrFile to fd 2...
	I0222 20:31:02.949686    5581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:31:02.949794    5581 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:31:02.951200    5581 out.go:303] Setting JSON to false
	I0222 20:31:02.971066    5581 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1837,"bootTime":1677124825,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 20:31:02.971148    5581 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 20:31:02.995538    5581 out.go:177] * [functional-106000] minikube v1.29.0 on Darwin 13.2
	I0222 20:31:03.039636    5581 notify.go:220] Checking for updates...
	I0222 20:31:03.061237    5581 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 20:31:03.082415    5581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:31:03.124196    5581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 20:31:03.145418    5581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 20:31:03.166434    5581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 20:31:03.187294    5581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 20:31:03.208831    5581 config.go:182] Loaded profile config "functional-106000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:31:03.209186    5581 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 20:31:03.305875    5581 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 20:31:03.305997    5581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:31:03.465621    5581 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 04:31:03.222963113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:31:03.524134    5581 out.go:177] * Using the docker driver based on existing profile
	I0222 20:31:03.546372    5581 start.go:296] selected driver: docker
	I0222 20:31:03.546398    5581 start.go:857] validating driver "docker" against &{Name:functional-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-106000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:31:03.546533    5581 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 20:31:03.571293    5581 out.go:177] 
	W0222 20:31:03.592304    5581 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0222 20:31:03.629260    5581 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-106000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.59s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-106000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-106000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (833.609278ms)

                                                
                                                
-- stdout --
	* [functional-106000] minikube v1.29.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 20:31:04.534682    5641 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:31:04.534930    5641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:31:04.534936    5641 out.go:309] Setting ErrFile to fd 2...
	I0222 20:31:04.534963    5641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:31:04.535110    5641 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:31:04.536940    5641 out.go:303] Setting JSON to false
	I0222 20:31:04.557930    5641 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1839,"bootTime":1677124825,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0222 20:31:04.558008    5641 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0222 20:31:04.580009    5641 out.go:177] * [functional-106000] minikube v1.29.0 sur Darwin 13.2
	I0222 20:31:04.602010    5641 notify.go:220] Checking for updates...
	I0222 20:31:04.623026    5641 out.go:177]   - MINIKUBE_LOCATION=15909
	I0222 20:31:04.643808    5641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	I0222 20:31:04.685872    5641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0222 20:31:04.727967    5641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0222 20:31:04.770701    5641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	I0222 20:31:04.813352    5641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0222 20:31:04.835142    5641 config.go:182] Loaded profile config "functional-106000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:31:04.835474    5641 driver.go:365] Setting default libvirt URI to qemu:///system
	I0222 20:31:04.906964    5641 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0222 20:31:04.907131    5641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0222 20:31:05.065203    5641 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 04:31:04.824133538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0222 20:31:05.087046    5641 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0222 20:31:05.128696    5641 start.go:296] selected driver: docker
	I0222 20:31:05.128712    5641 start.go:857] validating driver "docker" against &{Name:functional-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-106000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0222 20:31:05.128798    5641 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0222 20:31:05.174129    5641 out.go:177] 
	W0222 20:31:05.218197    5641 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0222 20:31:05.282036    5641 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.83s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 status
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [181df73a-d9cb-4234-9d83-5720ed0e09df] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009516427s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-106000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-106000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-106000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-106000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [75359be6-9f83-4262-9b71-0f38bd7a5487] Pending
helpers_test.go:344: "sp-pod" [75359be6-9f83-4262-9b71-0f38bd7a5487] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [75359be6-9f83-4262-9b71-0f38bd7a5487] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008955196s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-106000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-106000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-106000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bacdaab5-fcea-4a69-9b32-b3cdbc24709d] Pending
helpers_test.go:344: "sp-pod" [bacdaab5-fcea-4a69-9b32-b3cdbc24709d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bacdaab5-fcea-4a69-9b32-b3cdbc24709d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.009534847s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-106000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.54s)

                                                
                                    
TestFunctional/parallel/SSHCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.01s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh -n functional-106000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 cp functional-106000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2503500669/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh -n functional-106000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)

                                                
                                    
TestFunctional/parallel/MySQL (22.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-106000 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-xgfnl" [72d93295-827b-40e7-b3e0-7e942a923b8f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-xgfnl" [72d93295-827b-40e7-b3e0-7e942a923b8f] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.011267018s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-106000 exec mysql-888f84dd9-xgfnl -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-106000 exec mysql-888f84dd9-xgfnl -- mysql -ppassword -e "show databases;": exit status 1 (144.947686ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-106000 exec mysql-888f84dd9-xgfnl -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-106000 exec mysql-888f84dd9-xgfnl -- mysql -ppassword -e "show databases;": exit status 1 (116.517136ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-106000 exec mysql-888f84dd9-xgfnl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.20s)

                                                
                                    
TestFunctional/parallel/FileSync (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/3133/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo cat /etc/test/nested/copy/3133/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.52s)

                                                
                                    
TestFunctional/parallel/CertSync (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/3133.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo cat /etc/ssl/certs/3133.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/3133.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo cat /usr/share/ca-certificates/3133.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/31332.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo cat /etc/ssl/certs/31332.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/31332.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo cat /usr/share/ca-certificates/31332.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.84s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-106000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 ssh "sudo systemctl is-active crio": exit status 1 (437.74127ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
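A note on the non-zero exit above: systemctl is-active prints the unit's state and exits non-zero whenever the unit is not active, so "inactive" together with an SSH exit status of 3 is the expected outcome for crio on this docker-runtime cluster, and the test still passes. A minimal sketch of the same check (profile name taken from the log):

    # expected to print "inactive" and exit non-zero, since crio is not the active runtime here
    out/minikube-darwin-amd64 -p functional-106000 ssh "sudo systemctl is-active crio"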

                                                
                                    
TestFunctional/parallel/License (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.76s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-106000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-106000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-106000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-106000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| docker.io/library/minikube-local-cache-test | functional-106000 | c59322cc34fff | 30B    |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/google-containers/addon-resizer      | functional-106000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/localhost/my-image                | functional-106000 | da299304887a5 | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
2023/02/22 20:31:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-106000 image ls --format json:
[{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-106000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"e9c08e11b07f68c1805c
49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":
[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da299304887a59eb2f93c107a8be9a63af08861da2bdf1725d80d479f089e853","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-106000"],"size":"1240000"},{"id":"c59322cc34fff8b4d577493ba1832e3937f0974e39aed71be0bf9b6cefcb2bfc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-106000"],"size":"30"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":
"95400000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-106000 image ls --format yaml:
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-106000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: c59322cc34fff8b4d577493ba1832e3937f0974e39aed71be0bf9b6cefcb2bfc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-106000
size: "30"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 ssh pgrep buildkitd: exit status 1 (390.160732ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image build -t localhost/my-image:functional-106000 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 image build -t localhost/my-image:functional-106000 testdata/build: (4.771506663s)
functional_test.go:317: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-106000 image build -t localhost/my-image:functional-106000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 163921737eea
Removing intermediate container 163921737eea
---> eb79ef5b812c
Step 3/3 : ADD content.txt /
---> da299304887a
Successfully built da299304887a
Successfully tagged localhost/my-image:functional-106000
functional_test.go:320: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-106000 image build -t localhost/my-image:functional-106000 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (3.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.059014655s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-106000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.13s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-106000 docker-env) && out/minikube-darwin-amd64 status -p functional-106000"
functional_test.go:493: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-106000 docker-env) && out/minikube-darwin-amd64 status -p functional-106000": (1.192659142s)
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-106000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.10s)
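For reference, a minimal shell sketch of the docker-env round trip this test exercises (commands taken from the run lines above). Evaluating docker-env points the host docker CLI at the Docker daemon inside the functional-106000 node, so the final docker images call lists the cluster's images rather than the host's.

    # switch the current shell's docker CLI to the daemon inside the minikube node
    eval $(out/minikube-darwin-amd64 -p functional-106000 docker-env)
    # confirm the profile is up, then list images as seen from inside the node
    out/minikube-darwin-amd64 status -p functional-106000
    docker images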

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.48s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)
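Note: all three UpdateContextCmd cases run the same command, which rewrites the profile's kubeconfig entry so it points at the cluster's current API server address. A sketch (the kubectl follow-up is an assumed verification step, not part of the test):

    # Re-sync the kubeconfig context for this profile, then use it
    out/minikube-darwin-amd64 -p functional-106000 update-context --alsologtostderr -v=2
    kubectl --context functional-106000 get nodes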

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image load --daemon gcr.io/google-containers/addon-resizer:functional-106000
functional_test.go:352: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 image load --daemon gcr.io/google-containers/addon-resizer:functional-106000: (3.481313976s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image load --daemon gcr.io/google-containers/addon-resizer:functional-106000
functional_test.go:362: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 image load --daemon gcr.io/google-containers/addon-resizer:functional-106000: (2.38327214s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.574106231s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-106000
functional_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image load --daemon gcr.io/google-containers/addon-resizer:functional-106000
functional_test.go:242: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 image load --daemon gcr.io/google-containers/addon-resizer:functional-106000: (2.935575519s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.89s)
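Note: the load --daemon variants copy an image from the host's Docker daemon into the cluster's runtime; the tag-and-load case above corresponds to:

    # Pull and retag on the host, push the result into the cluster, then confirm it is visible there
    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-106000
    out/minikube-darwin-amd64 -p functional-106000 image load --daemon gcr.io/google-containers/addon-resizer:functional-106000
    out/minikube-darwin-amd64 -p functional-106000 image ls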

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image save gcr.io/google-containers/addon-resizer:functional-106000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 image save gcr.io/google-containers/addon-resizer:functional-106000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.252763366s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image rm gcr.io/google-containers/addon-resizer:functional-106000
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.476640189s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-106000
functional_test.go:421: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 image save --daemon gcr.io/google-containers/addon-resizer:functional-106000
functional_test.go:421: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 image save --daemon gcr.io/google-containers/addon-resizer:functional-106000: (3.386609299s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-106000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.52s)
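Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together round-trip one image through a tarball and back through the host daemon; in sequence (paths as used in this run):

    # Export from the cluster to a tar, drop it from the cluster, re-import it, then export straight to the host daemon
    out/minikube-darwin-amd64 -p functional-106000 image save gcr.io/google-containers/addon-resizer:functional-106000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-106000 image rm gcr.io/google-containers/addon-resizer:functional-106000
    out/minikube-darwin-amd64 -p functional-106000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-106000 image save --daemon gcr.io/google-containers/addon-resizer:functional-106000
    docker image inspect gcr.io/google-containers/addon-resizer:functional-106000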

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-106000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-106000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3ffed1ab-8330-477f-b4e3-8009bb966fd0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0222 20:30:23.911738    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [3ffed1ab-8330-477f-b4e3-8009bb966fd0] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.008562484s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-106000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-106000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 5322: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
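Note: the TunnelCmd serial group drives one background minikube tunnel process that gives LoadBalancer services a reachable ingress IP; a manual sketch of the same flow (the curl and kill steps are assumed follow-ups, not test commands):

    # Start the tunnel in the background, expose an nginx LoadBalancer service, read and hit its ingress IP
    out/minikube-darwin-amd64 -p functional-106000 tunnel --alsologtostderr &
    kubectl --context functional-106000 apply -f testdata/testsvc.yaml
    kubectl --context functional-106000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    curl http://127.0.0.1
    kill %1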

                                                
                                    
TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (1.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 service list -o json
functional_test.go:1547: (dbg) Done: out/minikube-darwin-amd64 -p functional-106000 service list -o json: (1.098676107s)
functional_test.go:1552: Took "1.098790447s" to run "out/minikube-darwin-amd64 -p functional-106000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (1.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1312: Took "444.284471ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1326: Took "74.54909ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1363: Took "425.874177ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1376: Took "68.58883ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)
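Note: both profile listings return the same profile data; --light skips the per-cluster status probe, which is why it comes back in roughly 70ms versus 430ms above. Manually (jq used only as an illustrative consumer, assumed installed):

    # Full listing (probes each cluster) versus light listing (config only)
    out/minikube-darwin-amd64 profile list -o json | jq .
    out/minikube-darwin-amd64 profile list -o json --light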

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-106000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2715770295/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677126654433369000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2715770295/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677126654433369000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2715770295/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677126654433369000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2715770295/001/test-1677126654433369000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.04029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 23 04:30 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 23 04:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 23 04:30 test-1677126654433369000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh cat /mount-9p/test-1677126654433369000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-106000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dccc8765-4595-4eb4-a908-8940895e6ad1] Pending
helpers_test.go:344: "busybox-mount" [dccc8765-4595-4eb4-a908-8940895e6ad1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dccc8765-4595-4eb4-a908-8940895e6ad1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dccc8765-4595-4eb4-a908-8940895e6ad1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.008231665s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-106000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo umount -f /mount-9p"
E0222 20:31:05.010151    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-106000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2715770295/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.76s)
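Note: the any-port mount test shares a host temp directory into the guest at /mount-9p over 9p and checks it from both an ssh session and a pod. A trimmed manual equivalent (the host path here is a hypothetical stand-in for the temp directory the test generates):

    # Share a host directory into the node, verify the 9p mount, then unmount
    out/minikube-darwin-amd64 mount -p functional-106000 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    out/minikube-darwin-amd64 -p functional-106000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-106000 ssh -- ls -la /mount-9p
    out/minikube-darwin-amd64 -p functional-106000 ssh "sudo umount -f /mount-9p"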

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-106000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2564316044/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (429.055776ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-106000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2564316044/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-106000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-106000 ssh "sudo umount -f /mount-9p": exit status 1 (517.403967ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-106000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-106000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2564316044/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.57s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-106000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-106000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-106000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.3s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-593000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-593000: (2.296583575s)
--- PASS: TestImageBuild/serial/NormalBuild (2.30s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-593000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-593000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.4s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-593000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.40s)
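Note: the four TestImageBuild cases vary only the build flags; the command shapes exercised are:

    # Plain build, build-arg plus no-cache, .dockerignore handling, and an explicit Dockerfile path
    out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-593000
    out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-593000
    out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-593000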

                                                
                                    
TestJSONOutput/start/Command (45.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-566000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0222 20:40:03.146862    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:40:30.831823    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-566000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (45.902824119s)
--- PASS: TestJSONOutput/start/Command (45.90s)
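Note: with --output=json each progress step is emitted as a one-line CloudEvents record on stdout, which is what the DistinctCurrentSteps and IncreasingCurrentSteps subtests below validate. A sketch of consuming that stream (jq assumed installed):

    # Stream the start steps as JSON and print each step index and name
    out/minikube-darwin-amd64 start -p json-output-566000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | .data.currentstep + " " + .data.name'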

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-566000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-566000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-566000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-566000 --output=json --user=testUser: (5.84066193s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.73s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-593000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-593000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (346.703429ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0666204d-92dd-43eb-93cc-617eefc9889c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-593000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a81093e-acb7-4323-b79c-308c58109658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"3a46a6dd-7aa5-4c98-97fd-75b702af5c9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig"}}
	{"specversion":"1.0","id":"d6e94835-2767-4de3-bdf1-ec958eff51b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"dd35073d-db96-4c5d-bc18-3c3eefe24607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79cf1baf-d682-49d4-9a12-71f161dc17a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube"}}
	{"specversion":"1.0","id":"4cb84ac7-687e-4260-8aae-0677d53398ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"682b589f-ffc6-4875-8ec7-fa593bb614dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-593000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-593000
--- PASS: TestErrorJSONOutput (0.73s)
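Note: the failure path emits the same CloudEvents envelope with type io.k8s.sigs.minikube.error and a non-zero exit code; pulling the error record out of the output above would look roughly like (jq assumed installed):

    # Force an unsupported driver and capture the structured error event
    out/minikube-darwin-amd64 start -p json-output-error-593000 --memory=2200 --output=json --wait=true --driver=fail \
      | jq 'select(.type=="io.k8s.sigs.minikube.error") | .data'
    # expected fields per the output above: exitcode "56", name "DRV_UNSUPPORTED_OS",
    # message "The driver 'fail' is not supported on darwin/amd64"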

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-110000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-110000 --network=: (29.739352478s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-110000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-110000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-110000: (2.544428748s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.34s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.01s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-161000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-161000 --network=bridge: (28.524647713s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-161000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-161000: (2.430684485s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.01s)

                                                
                                    
TestKicExistingNetwork (30.98s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-664000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-664000 --network=existing-network: (28.187477789s)
helpers_test.go:175: Cleaning up "existing-network-664000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-664000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-664000: (2.434528754s)
--- PASS: TestKicExistingNetwork (30.98s)

                                                
                                    
TestKicCustomSubnet (35.03s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-203000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-203000 --subnet=192.168.60.0/24: (32.565139986s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-203000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-203000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-203000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-203000: (2.404638847s)
--- PASS: TestKicCustomSubnet (35.03s)

                                                
                                    
TestKicStaticIP (32.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-807000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-807000 --static-ip=192.168.200.200: (29.329162266s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-807000 ip
helpers_test.go:175: Cleaning up "static-ip-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-807000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-807000: (2.607741797s)
--- PASS: TestKicStaticIP (32.17s)
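Note: the KIC tests above exercise the Docker-network knobs of the docker driver; the start flags involved, plus the inspection command used to verify the subnet, are:

    # Custom named network (name here is illustrative; the run used an auto-generated one),
    # default bridge, fixed subnet, and a static node IP
    out/minikube-darwin-amd64 start -p docker-network-110000 --network=my-net
    out/minikube-darwin-amd64 start -p docker-network-161000 --network=bridge
    out/minikube-darwin-amd64 start -p custom-subnet-203000 --subnet=192.168.60.0/24
    out/minikube-darwin-amd64 start -p static-ip-807000 --static-ip=192.168.200.200
    docker network inspect custom-subnet-203000 --format "{{(index .IPAM.Config 0).Subnet}}"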

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (63.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-003000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-003000 --driver=docker : (28.46767748s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-005000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-005000 --driver=docker : (27.555464304s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-003000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-005000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-005000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-005000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-005000: (2.596476281s)
helpers_test.go:175: Cleaning up "first-003000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-003000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-003000: (2.581888029s)
--- PASS: TestMinikubeProfile (63.02s)
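Note: TestMinikubeProfile only verifies that the active profile can be switched between two clusters; the sequence it runs is:

    # Create two clusters, flip the active profile between them, listing profiles as JSON after each switch
    out/minikube-darwin-amd64 start -p first-003000 --driver=docker
    out/minikube-darwin-amd64 start -p second-005000 --driver=docker
    out/minikube-darwin-amd64 profile first-003000
    out/minikube-darwin-amd64 profile list -ojson
    out/minikube-darwin-amd64 profile second-005000
    out/minikube-darwin-amd64 profile list -ojson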

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.32s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-599000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-599000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.313684023s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.32s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-599000 ssh -- ls /minikube-host
E0222 20:44:43.074088    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.14s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-621000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-621000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.138208449s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-621000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.12s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-599000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-599000 --alsologtostderr -v=5: (2.120789717s)
--- PASS: TestMountStart/serial/DeleteFirst (2.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-621000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.57s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-621000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-621000: (1.570399745s)
--- PASS: TestMountStart/serial/Stop (1.57s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.11s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-621000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-621000: (5.111714855s)
--- PASS: TestMountStart/serial/RestartStopped (6.11s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-621000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-216000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0222 20:46:06.121853    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-216000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m17.910019684s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.62s)

                                                
                                    
TestMultiNode/serial/AddNode (21.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-216000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-216000 -v 3 --alsologtostderr: (20.376525204s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr: (1.018624658s)
--- PASS: TestMultiNode/serial/AddNode (21.40s)
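Note: the multinode flow starts a two-node cluster and then grows it to three before the later subtests exercise it; the commands are:

    # Two nodes at creation, a third added afterwards, then a status check across all nodes
    out/minikube-darwin-amd64 start -p multinode-216000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
    out/minikube-darwin-amd64 node add -p multinode-216000 -v 3 --alsologtostderr
    out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr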

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (14.95s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 status --output json --alsologtostderr: (1.192195029s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp testdata/cp-test.txt multinode-216000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile510455077/001/cp-test_multinode-216000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000:/home/docker/cp-test.txt multinode-216000-m02:/home/docker/cp-test_multinode-216000_multinode-216000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m02 "sudo cat /home/docker/cp-test_multinode-216000_multinode-216000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000:/home/docker/cp-test.txt multinode-216000-m03:/home/docker/cp-test_multinode-216000_multinode-216000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m03 "sudo cat /home/docker/cp-test_multinode-216000_multinode-216000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp testdata/cp-test.txt multinode-216000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile510455077/001/cp-test_multinode-216000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000-m02:/home/docker/cp-test.txt multinode-216000:/home/docker/cp-test_multinode-216000-m02_multinode-216000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000 "sudo cat /home/docker/cp-test_multinode-216000-m02_multinode-216000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000-m02:/home/docker/cp-test.txt multinode-216000-m03:/home/docker/cp-test_multinode-216000-m02_multinode-216000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m03 "sudo cat /home/docker/cp-test_multinode-216000-m02_multinode-216000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp testdata/cp-test.txt multinode-216000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile510455077/001/cp-test_multinode-216000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000-m03:/home/docker/cp-test.txt multinode-216000:/home/docker/cp-test_multinode-216000-m03_multinode-216000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000 "sudo cat /home/docker/cp-test_multinode-216000-m03_multinode-216000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000-m03:/home/docker/cp-test.txt multinode-216000-m02:/home/docker/cp-test_multinode-216000-m03_multinode-216000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m02 "sudo cat /home/docker/cp-test_multinode-216000-m03_multinode-216000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.95s)
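Note: CopyFile checks every direction of minikube cp (host to node, node to host, node to node, addressing nodes by name); one representative set, with the local path and target filename here being illustrative stand-ins:

    # host -> node, node -> host, node -> node, then verify on the target node
    out/minikube-darwin-amd64 -p multinode-216000 cp testdata/cp-test.txt multinode-216000:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000:/home/docker/cp-test.txt /tmp/cp-test_multinode-216000.txt
    out/minikube-darwin-amd64 -p multinode-216000 cp multinode-216000:/home/docker/cp-test.txt multinode-216000-m02:/home/docker/cp-test_copy.txt
    out/minikube-darwin-amd64 -p multinode-216000 ssh -n multinode-216000-m02 "sudo cat /home/docker/cp-test_copy.txt"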

                                                
                                    
TestMultiNode/serial/StopNode (3.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 node stop m03: (1.531004489s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-216000 status: exit status 7 (768.915617ms)

                                                
                                                
-- stdout --
	multinode-216000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-216000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-216000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr: exit status 7 (761.093697ms)

                                                
                                                
-- stdout --
	multinode-216000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-216000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-216000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 20:47:16.834536    9505 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:47:16.834738    9505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:47:16.834743    9505 out.go:309] Setting ErrFile to fd 2...
	I0222 20:47:16.834747    9505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:47:16.834892    9505 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:47:16.835106    9505 out.go:303] Setting JSON to false
	I0222 20:47:16.835143    9505 mustload.go:65] Loading cluster: multinode-216000
	I0222 20:47:16.835209    9505 notify.go:220] Checking for updates...
	I0222 20:47:16.835444    9505 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:47:16.835458    9505 status.go:255] checking status of multinode-216000 ...
	I0222 20:47:16.835880    9505 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:47:16.893689    9505 status.go:330] multinode-216000 host status = "Running" (err=<nil>)
	I0222 20:47:16.893713    9505 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:47:16.894044    9505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000
	I0222 20:47:16.953574    9505 host.go:66] Checking if "multinode-216000" exists ...
	I0222 20:47:16.953891    9505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 20:47:16.953954    9505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:47:17.013323    9505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51081 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000/id_rsa Username:docker}
	I0222 20:47:17.103865    9505 ssh_runner.go:195] Run: systemctl --version
	I0222 20:47:17.108530    9505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:47:17.118108    9505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-216000
	I0222 20:47:17.177487    9505 kubeconfig.go:92] found "multinode-216000" server: "https://127.0.0.1:51085"
	I0222 20:47:17.177514    9505 api_server.go:165] Checking apiserver status ...
	I0222 20:47:17.177567    9505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0222 20:47:17.187746    9505 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1920/cgroup
	W0222 20:47:17.196401    9505 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1920/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0222 20:47:17.196465    9505 ssh_runner.go:195] Run: ls
	I0222 20:47:17.200303    9505 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51085/healthz ...
	I0222 20:47:17.205229    9505 api_server.go:278] https://127.0.0.1:51085/healthz returned 200:
	ok
	I0222 20:47:17.205240    9505 status.go:421] multinode-216000 apiserver status = Running (err=<nil>)
	I0222 20:47:17.205256    9505 status.go:257] multinode-216000 status: &{Name:multinode-216000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0222 20:47:17.205268    9505 status.go:255] checking status of multinode-216000-m02 ...
	I0222 20:47:17.205514    9505 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:47:17.266293    9505 status.go:330] multinode-216000-m02 host status = "Running" (err=<nil>)
	I0222 20:47:17.266314    9505 host.go:66] Checking if "multinode-216000-m02" exists ...
	I0222 20:47:17.266614    9505 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-216000-m02
	I0222 20:47:17.328308    9505 host.go:66] Checking if "multinode-216000-m02" exists ...
	I0222 20:47:17.328625    9505 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0222 20:47:17.328681    9505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-216000-m02
	I0222 20:47:17.389300    9505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51154 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/multinode-216000-m02/id_rsa Username:docker}
	I0222 20:47:17.480405    9505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0222 20:47:17.490092    9505 status.go:257] multinode-216000-m02 status: &{Name:multinode-216000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0222 20:47:17.490114    9505 status.go:255] checking status of multinode-216000-m03 ...
	I0222 20:47:17.490404    9505 cli_runner.go:164] Run: docker container inspect multinode-216000-m03 --format={{.State.Status}}
	I0222 20:47:17.549127    9505 status.go:330] multinode-216000-m03 host status = "Stopped" (err=<nil>)
	I0222 20:47:17.549148    9505 status.go:343] host is not running, skipping remaining checks
	I0222 20:47:17.549157    9505 status.go:257] multinode-216000-m03 status: &{Name:multinode-216000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.06s)
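The stop-and-inspect sequence above can be replayed by hand roughly as follows (profile and node names from this run; per the log, status exits 7 while any node is stopped):
$ out/minikube-darwin-amd64 -p multinode-216000 node stop m03
$ out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
$ echo $?    # 7 in this run, because multinode-216000-m03 reports host/kubelet Stopped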

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 node start m03 --alsologtostderr: (12.02117459s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.13s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (86.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-216000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-216000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-216000: (23.029274119s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-216000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-216000 --wait=true -v=8 --alsologtostderr: (1m2.96044364s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-216000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.09s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 node delete m03: (5.305410029s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.21s)
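The deletion is checked from both the minikube side and the Kubernetes side; a rough replay with the commands shown in the log (cluster assumed to still exist):
$ out/minikube-darwin-amd64 -p multinode-216000 node delete m03
$ out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
$ kubectl get nodes
$ docker volume ls    # the deleted node's volume should no longer be listed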

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-216000 stop: (21.596395274s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-216000 status: exit status 7 (162.881623ms)

                                                
                                                
-- stdout --
	multinode-216000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-216000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr: exit status 7 (160.733689ms)

                                                
                                                
-- stdout --
	multinode-216000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-216000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0222 20:49:24.786157   10046 out.go:296] Setting OutFile to fd 1 ...
	I0222 20:49:24.786327   10046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:49:24.786332   10046 out.go:309] Setting ErrFile to fd 2...
	I0222 20:49:24.786335   10046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0222 20:49:24.786446   10046 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
	I0222 20:49:24.786622   10046 out.go:303] Setting JSON to false
	I0222 20:49:24.786660   10046 mustload.go:65] Loading cluster: multinode-216000
	I0222 20:49:24.786750   10046 notify.go:220] Checking for updates...
	I0222 20:49:24.786990   10046 config.go:182] Loaded profile config "multinode-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0222 20:49:24.787003   10046 status.go:255] checking status of multinode-216000 ...
	I0222 20:49:24.787389   10046 cli_runner.go:164] Run: docker container inspect multinode-216000 --format={{.State.Status}}
	I0222 20:49:24.843355   10046 status.go:330] multinode-216000 host status = "Stopped" (err=<nil>)
	I0222 20:49:24.843373   10046 status.go:343] host is not running, skipping remaining checks
	I0222 20:49:24.843378   10046 status.go:257] multinode-216000 status: &{Name:multinode-216000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0222 20:49:24.843402   10046 status.go:255] checking status of multinode-216000-m02 ...
	I0222 20:49:24.843654   10046 cli_runner.go:164] Run: docker container inspect multinode-216000-m02 --format={{.State.Status}}
	I0222 20:49:24.901073   10046 status.go:330] multinode-216000-m02 host status = "Stopped" (err=<nil>)
	I0222 20:49:24.901103   10046 status.go:343] host is not running, skipping remaining checks
	I0222 20:49:24.901112   10046 status.go:257] multinode-216000-m02 status: &{Name:multinode-216000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.92s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.61s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-216000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0222 20:49:43.071094    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:50:03.139730    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-216000 --wait=true -v=8 --alsologtostderr --driver=docker : (52.687935329s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-216000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.61s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.4s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-216000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-216000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-216000-m02 --driver=docker : exit status 14 (393.526353ms)

                                                
                                                
-- stdout --
	* [multinode-216000-m02] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-216000-m02' is duplicated with machine name 'multinode-216000-m02' in profile 'multinode-216000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-216000-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-216000-m03 --driver=docker : (30.028312082s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-216000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-216000: exit status 80 (481.739333ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-216000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-216000-m03 already exists in multinode-216000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-216000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-216000-m03: (2.445167861s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.40s)
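The two failure modes asserted here can be reproduced directly from the CLI (profile names and exit codes as in the run above):
$ out/minikube-darwin-amd64 start -p multinode-216000-m02 --driver=docker    # exit 14 (MK_USAGE): profile name collides with an existing machine name in multinode-216000
$ out/minikube-darwin-amd64 node add -p multinode-216000                     # exit 80 (GUEST_NODE_ADD): the next node name (m03) clashes with the standalone multinode-216000-m03 profile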

                                                
                                    
TestPreload (137.64s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-429000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0222 20:51:26.184682    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-429000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m11.20616215s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-429000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-429000 -- docker pull gcr.io/k8s-minikube/busybox: (2.619317926s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-429000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-429000: (10.831137383s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-429000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-429000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (49.873374493s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-429000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-429000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-429000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-429000: (2.695918508s)
--- PASS: TestPreload (137.64s)
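The preload check above amounts to: start without a preloaded tarball, add an image, stop, restart with defaults, and confirm the image survived. A sketch with the versions from this run (verbose flags omitted):
$ out/minikube-darwin-amd64 start -p test-preload-429000 --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.24.4
$ out/minikube-darwin-amd64 ssh -p test-preload-429000 -- docker pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-429000
$ out/minikube-darwin-amd64 start -p test-preload-429000 --memory=2200 --wait=true --driver=docker
$ out/minikube-darwin-amd64 ssh -p test-preload-429000 -- docker images    # busybox should still be listed after the restart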

                                                
                                    
TestScheduledStopUnix (107.72s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-259000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-259000 --memory=2048 --driver=docker : (33.538420011s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-259000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-259000 -n scheduled-stop-259000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-259000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-259000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-259000 -n scheduled-stop-259000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-259000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-259000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0222 20:54:43.063565    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:55:03.133697    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-259000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-259000: exit status 7 (105.234615ms)

                                                
                                                
-- stdout --
	scheduled-stop-259000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-259000 -n scheduled-stop-259000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-259000 -n scheduled-stop-259000: exit status 7 (101.285191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-259000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-259000: (2.368643766s)
--- PASS: TestScheduledStopUnix (107.72s)
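The scheduled-stop behaviour being checked can be driven manually like this (profile name from the log; --schedule and --cancel-scheduled are the flags the test exercises):
$ out/minikube-darwin-amd64 stop -p scheduled-stop-259000 --schedule 5m        # arm a stop five minutes out
$ out/minikube-darwin-amd64 stop -p scheduled-stop-259000 --cancel-scheduled   # disarm it again
$ out/minikube-darwin-amd64 stop -p scheduled-stop-259000 --schedule 15s       # arm a short timer and let it fire
$ out/minikube-darwin-amd64 status -p scheduled-stop-259000                    # exits 7 once the host reports Stopped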

                                                
                                    
TestSkaffold (65.74s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1172479451 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-593000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-593000 --memory=2600 --driver=docker : (32.530095039s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1172479451 run --minikube-profile skaffold-593000 --kube-context skaffold-593000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1172479451 run --minikube-profile skaffold-593000 --kube-context skaffold-593000 --status-check=true --port-forward=false --interactive=false: (17.488716903s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-86c99c446d-s6dzm" [b3f365f6-953e-4dd1-8a41-738497feee56] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014621609s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-694488dc5d-x6tzn" [73592a01-d779-4f3d-8773-3c33f81d0a89] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007160023s
helpers_test.go:175: Cleaning up "skaffold-593000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-593000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-593000: (2.854681608s)
--- PASS: TestSkaffold (65.74s)
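The skaffold integration boils down to pointing skaffold at the profile and kube-context created for the test; a sketch with the flags used above (the skaffold binary path is the temporary one downloaded by this run, and the trailing pod check is an illustrative equivalent of the test's readiness wait):
$ out/minikube-darwin-amd64 start -p skaffold-593000 --memory=2600 --driver=docker
$ /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1172479451 run --minikube-profile skaffold-593000 --kube-context skaffold-593000 --status-check=true --port-forward=false --interactive=false
$ kubectl --context skaffold-593000 get pods -l app=leeroy-app    # both leeroy-app and leeroy-web should reach Running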

                                                
                                    
TestInsufficientStorage (14.49s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-191000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-191000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.315476879s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dc16bde7-2b73-46d4-95e9-b4cc9eed04df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-191000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"988b0b64-2ad5-47f8-9d14-f85400dcd4b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"b30c5bb1-8648-4f53-90dc-e5ec7279b8ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig"}}
	{"specversion":"1.0","id":"009dcccc-d2be-43dc-aae4-3c506a68e1c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"07a94c9d-5b93-462e-b4cd-a7e682dee74b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2832437e-d171-4c3b-a240-4e6b191aa6f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube"}}
	{"specversion":"1.0","id":"373b4f2f-85ab-43d4-94b9-d6c769790d44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05177738-3f7b-4fde-ba51-6f259f35f4aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"dc47a06c-9541-4837-a10c-4d7faf7de7e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"16fb5680-d27d-402d-a1dc-3edf8a7173d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9da90b3-a239-4bc4-94e4-8d4291764a19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"2031796e-dc02-4abc-9626-f6f258e8175a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-191000 in cluster insufficient-storage-191000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"339fa0ac-711a-4fea-bb8d-96508da94349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"12111bb5-55d5-439b-b047-7cb7244deb95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9d37e73-a76c-4434-b413-e4bcec5b0aa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-191000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-191000 --output=json --layout=cluster: exit status 7 (393.898969ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-191000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-191000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 20:56:24.077604   11824 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-191000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-191000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-191000 --output=json --layout=cluster: exit status 7 (396.338325ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-191000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-191000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0222 20:56:24.474175   11834 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-191000" does not appear in /Users/jenkins/minikube-integration/15909-2664/kubeconfig
	E0222 20:56:24.483698   11834 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/insufficient-storage-191000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-191000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-191000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-191000: (2.381615862s)
--- PASS: TestInsufficientStorage (14.49s)
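This test simulates a nearly full /var through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE settings visible in the JSON events above (presumably injected as environment variables, since they are listed alongside the other MINIKUBE_* settings), then asserts that start fails with exit 26 and that status reports 507 InsufficientStorage. A rough replay under that assumption:
$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 out/minikube-darwin-amd64 start -p insufficient-storage-191000 --memory=2048 --output=json --wait=true --driver=docker    # exit 26 (RSRC_DOCKER_STORAGE)
$ out/minikube-darwin-amd64 status -p insufficient-storage-191000 --output=json --layout=cluster    # StatusCode 507, exit 7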

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.43s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2913640500/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2913640500/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2913640500/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2913640500/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.43s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.97s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current567757416/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current567757416/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current567757416/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current567757416/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.97s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.64s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.51s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-634000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-634000: (3.514609987s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.51s)

                                                
                                    
TestPause/serial/Start (49.38s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-766000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0222 21:03:43.335078    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-766000 --memory=2048 --install-addons=false --wait=all --driver=docker : (49.381457873s)
--- PASS: TestPause/serial/Start (49.38s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (45.3s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-766000 --alsologtostderr -v=1 --driver=docker 
E0222 21:04:43.051078    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-766000 --alsologtostderr -v=1 --driver=docker : (45.28845988s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.30s)

                                                
                                    
TestPause/serial/Pause (0.7s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-766000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-766000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-766000 --output=json --layout=cluster: exit status 2 (420.2959ms)

                                                
                                                
-- stdout --
	{"Name":"pause-766000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-766000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)

                                                
                                    
TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-766000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-766000 --alsologtostderr -v=5
E0222 21:05:03.120214    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
TestPause/serial/DeletePaused (2.97s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-766000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-766000 --alsologtostderr -v=5: (2.966239925s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (2.65s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2.467412042s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-766000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-766000: exit status 1 (58.833647ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-766000

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.65s)
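Taken together, the pause group walks one profile through pause, status, unpause, pause again, delete, and cleanup verification; roughly, with the profile name from the log:
$ out/minikube-darwin-amd64 pause -p pause-766000 --alsologtostderr -v=5
$ out/minikube-darwin-amd64 status -p pause-766000 --output=json --layout=cluster    # exit 2, StatusName "Paused" (418)
$ out/minikube-darwin-amd64 unpause -p pause-766000 --alsologtostderr -v=5
$ out/minikube-darwin-amd64 delete -p pause-766000 --alsologtostderr -v=5
$ docker ps -a && docker volume inspect pause-766000 && docker network ls            # the volume lookup fails once the profile is gone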

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.67s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-394000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-394000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (673.721743ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-394000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.67s)
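The usage error being asserted is simply the flag combination below; per the stderr above, the suggested remedy is to clear any global kubernetes-version setting:
$ out/minikube-darwin-amd64 start -p NoKubernetes-394000 --no-kubernetes --kubernetes-version=1.20 --driver=docker    # exit 14 (MK_USAGE)
$ out/minikube-darwin-amd64 config unset kubernetes-version    # the 'minikube config unset kubernetes-version' hint from the error output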

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.6s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-394000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-394000 --driver=docker : (37.172721732s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-394000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.60s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.1s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-394000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-394000 --no-kubernetes --driver=docker : (6.120507062s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-394000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-394000 status -o json: exit status 2 (420.248924ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-394000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-394000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-394000: (2.563596182s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.10s)

                                                
                                    
TestNoKubernetes/serial/Start (7.49s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-394000 --no-kubernetes --driver=docker 
E0222 21:05:59.487204    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-394000 --no-kubernetes --driver=docker : (7.489520546s)
--- PASS: TestNoKubernetes/serial/Start (7.49s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-394000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-394000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (381.065883ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
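The "no kubelet" check is a plain systemd probe over SSH; the minikube command exiting non-zero (systemctl status 3 in the stderr above) is what the test treats as "not running":
$ out/minikube-darwin-amd64 ssh -p NoKubernetes-394000 "sudo systemctl is-active --quiet service kubelet"
$ echo $?    # non-zero here, since kubelet is not active in --no-kubernetes mode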

                                                
                                    
TestNoKubernetes/serial/ProfileList (15.79s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (15.065960425s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.79s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.61s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-394000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-394000: (1.60632274s)
--- PASS: TestNoKubernetes/serial/Stop (1.61s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.26s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-394000 --driver=docker 
E0222 21:06:27.171969    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-394000 --driver=docker : (5.256197337s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.26s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-394000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-394000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.546066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (49.35s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (49.352747292s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.35s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-qfgll" [4d04668f-0f07-486c-9cbc-e1375edc5142] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-qfgll" [4d04668f-0f07-486c-9cbc-e1375edc5142] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00749584s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.19s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
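
Note: the DNS, Localhost, and HairPin steps that repeat for every network plugin above all exec into the netcat deployment — resolving kubernetes.default, dialing localhost:8080, and dialing the pod back through its own "netcat" service (the hairpin case). A hedged Go sketch of the same three probes, reusing the context, deployment name, and port shown in the log; error handling is simplified relative to the real net_test.go checks:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment of the given cluster context.
func probe(context, shellCmd string) error {
	return exec.Command("kubectl", "--context", context, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", shellCmd).Run()
}

func main() {
	ctx := "auto-310000"
	checks := []struct{ name, cmd string }{
		{"DNS", "nslookup kubernetes.default"},
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"}, // the pod dialing itself through its own service
	}
	for _, c := range checks {
		if err := probe(ctx, c.cmd); err != nil {
			fmt.Printf("%s check failed: %v\n", c.name, err)
		} else {
			fmt.Printf("%s check passed\n", c.name)
		}
	}
}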

TestNetworkPlugins/group/kindnet/Start (57.32s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E0222 21:08:06.162761    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (57.315999024s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-v72vg" [24528a94-3b59-49dc-b476-3c32fa125bf7] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014376225s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-x47pq" [9e2e8a13-2cd4-4e5d-b481-7a16c3bc0aac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-x47pq" [9e2e8a13-2cd4-4e5d-b481-7a16c3bc0aac] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.009638531s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (81.37s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0222 21:09:43.044392    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m21.365387877s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.37s)

TestNetworkPlugins/group/custom-flannel/Start (61.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (1m1.385532564s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.39s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cfrr7" [791dd385-f39a-49b9-838c-dbd95083e3cc] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016018249s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-310000 "pgrep -a kubelet"
E0222 21:10:59.481375    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

TestNetworkPlugins/group/calico/NetCatPod (15.24s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-jz6bt" [88dc6c30-87ba-4369-af7b-cc3d70e6b13a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-jz6bt" [88dc6c30-87ba-4369-af7b-cc3d70e6b13a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.009191997s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.24s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (52.27s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (52.265569801s)
--- PASS: TestNetworkPlugins/group/false/Start (52.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.62s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.62s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-qc9br" [4be50fff-4011-4bc5-b599-160fada6bbe7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-qc9br" [4be50fff-4011-4bc5-b599-160fada6bbe7] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.007300385s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)
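
Note: every NetCatPod step follows the same deploy-and-wait pattern — re-apply the netcat deployment, then block until pods labelled app=netcat are Ready. A hedged Go sketch of that pattern using kubectl wait (the suite uses its own polling helper rather than kubectl wait; context name and manifest path are taken from the log):

package main

import (
	"log"
	"os/exec"
)

// kubectl runs one kubectl command and aborts on failure.
func kubectl(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	ctx := "--context=custom-flannel-310000"
	// Re-apply the netcat deployment, then block until its pods report Ready.
	kubectl(ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	kubectl(ctx, "wait", "--for=condition=ready", "pod", "-l", "app=netcat", "--timeout=15m")
}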

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (51.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
E0222 21:12:19.941865    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:12:20.023974    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:12:20.185582    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:12:20.505837    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:12:21.147312    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:12:22.427466    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:12:24.988589    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:12:30.108706    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (51.218120607s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.22s)

TestNetworkPlugins/group/false/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.44s)

TestNetworkPlugins/group/false/NetCatPod (11.23s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-9jl5g" [b75ea542-61d4-4125-825e-6b8c1e40c5d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-9jl5g" [b75ea542-61d4-4125-825e-6b8c1e40c5d6] Running
E0222 21:12:40.349518    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.00828753s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.23s)

TestNetworkPlugins/group/false/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (58.79s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (58.794571462s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.79s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4n8dq" [04885f9a-b566-4680-a1d5-273c4860d11f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-4n8dq" [04885f9a-b566-4680-a1d5-273c4860d11f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.008466121s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (57.35s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0222 21:13:52.033127    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:52.038259    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:52.048373    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:52.068549    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:52.108792    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:52.189104    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:52.349691    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:52.670763    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:53.312505    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:54.594220    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:13:57.154306    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:14:02.274662    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (57.353872592s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.35s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cgn5c" [69d2cfdc-0e40-4039-ae1d-1b56758355e3] Running
E0222 21:14:12.514691    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.013672193s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (12.18s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-mbsll" [47205aec-c754-403b-8faa-71e6ae1443aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-mbsll" [47205aec-c754-403b-8faa-71e6ae1443aa] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.008702325s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.18s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

TestNetworkPlugins/group/bridge/NetCatPod (13.23s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7ctgg" [7555775e-9751-4600-ad35-510246cc1d50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-7ctgg" [7555775e-9751-4600-ad35-510246cc1d50] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.008007449s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.23s)

TestNetworkPlugins/group/kubenet/Start (52.25s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-310000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (52.253281497s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.25s)

TestNetworkPlugins/group/bridge/DNS (0.44s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.44s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-310000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.2s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-310000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-9wm4q" [00c64cfb-291e-41a0-bb85-98dff990ead9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-9wm4q" [00c64cfb-291e-41a0-bb85-98dff990ead9] Running
E0222 21:15:54.282691    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:15:54.289168    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:15:54.301338    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:15:54.322214    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:15:54.363731    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:15:54.444316    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:15:54.604881    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:15:54.925216    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.009394425s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.20s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-310000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-310000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (59.16s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0222 21:16:35.249686    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:16:35.875053    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:16:42.495270    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:42.500418    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:42.510600    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:42.530826    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:42.570926    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:42.651376    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:42.811575    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:43.133384    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:43.774955    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:45.055448    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:47.617534    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:16:52.738073    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:17:02.979630    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:17:16.211387    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (59.155946492s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.16s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-081000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f189239c-121e-4dce-bb2b-c6960fcd3031] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0222 21:17:19.859970    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f189239c-121e-4dce-bb2b-c6960fcd3031] Running
E0222 21:17:22.574010    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:17:23.460956    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.016038341s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-081000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)
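
Note: after the busybox pod goes Ready, the DeployApp step reads "ulimit -n" from inside it. A hedged Go sketch of capturing and sanity-checking that output, reusing the context and pod name from the log; the numeric parse is illustrative and not part of the real test:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-081000",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	limit, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("unexpected ulimit output:", string(out))
		return
	}
	fmt.Println("open-file limit inside busybox:", limit)
}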

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-081000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-081000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/no-preload/serial/Stop (10.95s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-081000 --alsologtostderr -v=3
E0222 21:17:34.133786    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:34.140203    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:34.152377    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:34.174546    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:34.216404    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:34.297484    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:34.459701    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:34.781269    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:35.421973    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:36.703044    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-081000 --alsologtostderr -v=3: (10.946740376s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.95s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000: exit status 7 (105.444239ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-081000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)
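
Note: the status probe above exits with status 7 because the cluster host is Stopped, and the test explicitly treats that as acceptable ("may be ok") before enabling the dashboard addon. A hedged Go sketch of the same interpretation, assuming the binary path and profile name shown in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-081000", "-n", "no-preload-081000").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Exit status 7 means the host is stopped; that is expected right after "minikube stop".
		fmt.Printf("host stopped (exit 7, may be ok): %s\n", out)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("host state: %s\n", out)
}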

TestStartStop/group/no-preload/serial/SecondStart (305.34s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0222 21:17:39.263375    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:44.383579    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:17:47.547989    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:17:54.623663    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:18:04.420554    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:18:11.849401    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:11.854665    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:11.865063    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:11.887249    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:11.929433    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:12.011572    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:12.173678    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:12.493907    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:13.134584    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:14.415664    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:15.104098    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:18:16.975963    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:22.096258    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:32.336672    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:38.131270    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:18:52.029928    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:18:52.816553    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:18:56.064096    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:19:08.120265    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:08.126724    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:08.138863    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:08.160768    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:08.202935    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:08.283426    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:08.445612    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:08.766467    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:09.406902    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:10.688525    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:13.248724    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:18.369121    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:19.713046    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:19:26.138735    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 21:19:26.341742    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:19:28.611032    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:19:33.776158    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-081000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (5m4.757864176s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-081000 -n no-preload-081000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (305.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-865000 --alsologtostderr -v=3
E0222 21:21:25.052134    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-865000 --alsologtostderr -v=3: (1.566322047s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-865000 -n old-k8s-version-865000: exit status 7 (104.538902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-865000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jjg64" [8bf750ce-1a6d-4a4e-96a7-ea03314da240] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jjg64" [8bf750ce-1a6d-4a4e-96a7-ea03314da240] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.065230648s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jjg64" [8bf750ce-1a6d-4a4e-96a7-ea03314da240] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006230226s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-081000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-081000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-081000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-081000 -n no-preload-081000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-081000 -n no-preload-081000: exit status 2 (424.976483ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-081000 -n no-preload-081000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-081000 -n no-preload-081000: exit status 2 (424.062384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-081000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-081000 -n no-preload-081000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-081000 -n no-preload-081000
E0222 21:23:01.823854    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (59.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0222 21:23:11.844872    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:23:27.934741    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:23:39.534496    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:23:52.026189    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (59.553057381s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-677000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cee4fde5-0460-4a54-bdce-3fb0d5654a6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0222 21:24:08.115604    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cee4fde5-0460-4a54-bdce-3fb0d5654a6f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.014972668s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-677000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-677000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-677000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-677000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-677000 --alsologtostderr -v=3: (11.070910138s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-677000 -n embed-certs-677000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-677000 -n embed-certs-677000: exit status 7 (112.304361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-677000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (551.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0222 21:24:35.811568    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
E0222 21:24:43.084735    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 21:24:45.136577    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:24:46.200615    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 21:25:03.153779    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 21:25:12.828351    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
E0222 21:25:44.081148    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:25:54.274836    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/calico-310000/client.crt: no such file or directory
E0222 21:25:59.523119    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
E0222 21:26:11.773520    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kubenet-310000/client.crt: no such file or directory
E0222 21:26:42.487960    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/custom-flannel-310000/client.crt: no such file or directory
E0222 21:27:17.665392    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:17.670578    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:17.681913    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:17.702880    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:17.743211    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:17.825385    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:17.985562    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:18.306174    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:18.946777    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:19.851858    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:27:20.227797    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:22.788012    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:27.908104    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:34.126738    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:27:38.148616    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:27:58.628536    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:28:11.840976    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/enable-default-cni-310000/client.crt: no such file or directory
E0222 21:28:39.588777    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/no-preload-081000/client.crt: no such file or directory
E0222 21:28:42.899665    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/auto-310000/client.crt: no such file or directory
E0222 21:28:52.023280    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
E0222 21:29:08.111936    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/flannel-310000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (9m10.606262346s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-677000 -n embed-certs-677000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (551.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jmsp5" [a74cb13e-fa61-47ba-9907-e99057532f0c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012863499s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jmsp5" [a74cb13e-fa61-47ba-9907-e99057532f0c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009391457s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-677000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-677000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-677000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-677000 -n embed-certs-677000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-677000 -n embed-certs-677000: exit status 2 (421.889284ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-677000 -n embed-certs-677000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-677000 -n embed-certs-677000: exit status 2 (421.632252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-677000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-677000 -n embed-certs-677000
E0222 21:33:52.017720    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/kindnet-310000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-677000 -n embed-certs-677000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-783000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0222 21:33:57.175923    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/false-310000/client.crt: no such file or directory
E0222 21:34:02.562892    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/skaffold-593000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-783000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (50.638710898s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-783000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [509f651b-f784-487d-b3dd-7eeba5729450] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [509f651b-f784-487d-b3dd-7eeba5729450] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.014214605s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-783000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-783000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-783000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-783000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-783000 --alsologtostderr -v=3: (10.937751351s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000: exit status 7 (108.238508ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-783000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (553.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-783000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-783000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (9m13.168792332s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (553.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lt6rw" [95fd1cb4-dc4a-468b-9f32-c8b7130bffb5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014687554s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lt6rw" [95fd1cb4-dc4a-468b-9f32-c8b7130bffb5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007640848s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-783000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-783000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-783000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000: exit status 2 (426.042802ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000: exit status 2 (421.434021ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-783000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-783000 -n default-k8s-diff-port-783000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-150000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
E0222 21:44:43.230216    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 21:44:45.282540    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/bridge-310000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-150000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (43.40890309s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-150000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-150000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-150000 --alsologtostderr -v=3: (10.954685672s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-150000 -n newest-cni-150000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-150000 -n newest-cni-150000: exit status 7 (104.401141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-150000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-150000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-150000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (24.805931013s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-150000 -n newest-cni-150000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-150000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-150000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-150000 -n newest-cni-150000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-150000 -n newest-cni-150000: exit status 2 (421.09663ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-150000 -n newest-cni-150000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-150000 -n newest-cni-150000: exit status 2 (423.625152ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-150000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-150000 -n newest-cni-150000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-150000 -n newest-cni-150000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.31s)

                                                
                                    

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (15.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 14.019682ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5kmkn" [cc065141-b3a6-4f4c-b437-57373aa070e2] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008741126s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2ql5x" [f9ca393d-b69b-483b-8b73-130b353f7564] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009400744s
addons_test.go:305: (dbg) Run:  kubectl --context addons-566000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-566000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-566000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.556079762s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.68s)

                                                
                                    
TestAddons/parallel/Ingress (15.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-566000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-566000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-566000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66b35af0-36b1-4e57-8eaf-4d693b7d4214] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66b35af0-36b1-4e57-8eaf-4d693b7d4214] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.009174962s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-566000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (15.32s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-106000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-106000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-9chj5" [c6dea417-f526-4ea1-b19b-c53fe9f7a778] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0222 20:30:03.432183    3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-5cf7cc858f-9chj5" [c6dea417-f526-4ea1-b19b-c53fe9f7a778] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.017304582s
functional_test.go:1614: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.17s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestNetworkPlugins/group/cilium (5.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-310000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-310000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-310000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /etc/hosts:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /etc/resolv.conf:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-310000

>>> host: crictl pods:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: crictl containers:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> k8s: describe netcat deployment:
error: context "cilium-310000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-310000" does not exist

>>> k8s: netcat logs:
error: context "cilium-310000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-310000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-310000" does not exist

>>> k8s: coredns logs:
error: context "cilium-310000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-310000" does not exist

>>> k8s: api server logs:
error: context "cilium-310000" does not exist

>>> host: /etc/cni:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: ip a s:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: ip r s:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: iptables-save:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: iptables table nat:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-310000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-310000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-310000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-310000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-310000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-310000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-310000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-310000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-310000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-310000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-310000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: kubelet daemon config:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> k8s: kubelet logs:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-310000

>>> host: docker daemon status:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: docker daemon config:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: docker system info:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: cri-docker daemon status:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: cri-docker daemon config:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: cri-dockerd version:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: containerd daemon status:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: containerd daemon config:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: containerd config dump:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: crio daemon status:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: crio daemon config:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: /etc/crio:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

>>> host: crio config:
* Profile "cilium-310000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-310000"

----------------------- debugLogs end: cilium-310000 [took: 5.455729483s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-310000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-310000
--- SKIP: TestNetworkPlugins/group/cilium (5.96s)

x
+
TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-986000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-986000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)