Test Report: Docker_macOS 15909

                    
c3ced9e44b664dea818a5c37f69b411b40c816d1:2023-02-23:28040

Test fail (16/306)

TestIngressAddonLegacy/StartLegacyK8sCluster (263.93s)
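For reference, the minikube start invocation driven by this test (copied verbatim from the log below) is shown here; re-running it by hand against the same Docker Desktop host is one way to reproduce the exit status 109 outside the test harness. The profile name is generated per run, so a different -p value can be used when reproducing manually.

	out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker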

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0223 16:53:55.803766   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:54:23.496359   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:54:44.875000   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.881462   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.893596   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.914410   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.954724   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:45.035008   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:45.195664   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:45.515831   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:46.156235   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:47.438573   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:50.000168   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:55.120559   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:55:05.361515   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:55:25.842623   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m23.893518114s)

-- stdout --
	* [ingress-addon-legacy-691000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-691000 in cluster ingress-addon-legacy-691000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0223 16:51:42.330861   27829 out.go:296] Setting OutFile to fd 1 ...
	I0223 16:51:42.331022   27829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:51:42.331032   27829 out.go:309] Setting ErrFile to fd 2...
	I0223 16:51:42.331036   27829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:51:42.331149   27829 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 16:51:42.332560   27829 out.go:303] Setting JSON to false
	I0223 16:51:42.351039   27829 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6677,"bootTime":1677193225,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 16:51:42.351121   27829 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 16:51:42.373164   27829 out.go:177] * [ingress-addon-legacy-691000] minikube v1.29.0 on Darwin 13.2
	I0223 16:51:42.394401   27829 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 16:51:42.394367   27829 notify.go:220] Checking for updates...
	I0223 16:51:42.416555   27829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 16:51:42.438217   27829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 16:51:42.460208   27829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 16:51:42.481218   27829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 16:51:42.502257   27829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 16:51:42.523570   27829 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 16:51:42.583780   27829 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 16:51:42.583894   27829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:51:42.724375   27829 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:51:42.632900741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:51:42.746479   27829 out.go:177] * Using the docker driver based on user configuration
	I0223 16:51:42.768090   27829 start.go:296] selected driver: docker
	I0223 16:51:42.768117   27829 start.go:857] validating driver "docker" against <nil>
	I0223 16:51:42.768135   27829 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 16:51:42.772037   27829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:51:42.914635   27829 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:51:42.821663406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:51:42.914795   27829 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 16:51:42.914977   27829 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 16:51:42.936574   27829 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 16:51:42.958718   27829 cni.go:84] Creating CNI manager for ""
	I0223 16:51:42.958758   27829 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 16:51:42.958772   27829 start_flags.go:319] config:
	{Name:ingress-addon-legacy-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 16:51:43.001559   27829 out.go:177] * Starting control plane node ingress-addon-legacy-691000 in cluster ingress-addon-legacy-691000
	I0223 16:51:43.022515   27829 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 16:51:43.044385   27829 out.go:177] * Pulling base image ...
	I0223 16:51:43.086571   27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 16:51:43.086606   27829 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 16:51:43.146922   27829 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 16:51:43.146945   27829 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 16:51:43.193668   27829 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 16:51:43.193699   27829 cache.go:57] Caching tarball of preloaded images
	I0223 16:51:43.194039   27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 16:51:43.215875   27829 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0223 16:51:43.257763   27829 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:51:43.458479   27829 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 16:51:56.118847   27829 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:51:56.119090   27829 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:51:56.790026   27829 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0223 16:51:56.790255   27829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/config.json ...
	I0223 16:51:56.790284   27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/config.json: {Name:mka594ad54848610af6d11e54032c8be3efc53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:51:56.790580   27829 cache.go:193] Successfully downloaded all kic artifacts
	I0223 16:51:56.790608   27829 start.go:364] acquiring machines lock for ingress-addon-legacy-691000: {Name:mk8657a1d89f12d943cb8e554a12c5028bc1eb5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 16:51:56.790701   27829 start.go:368] acquired machines lock for "ingress-addon-legacy-691000" in 85.148µs
	I0223 16:51:56.790728   27829 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 16:51:56.790771   27829 start.go:125] createHost starting for "" (driver="docker")
	I0223 16:51:56.842102   27829 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0223 16:51:56.842417   27829 start.go:159] libmachine.API.Create for "ingress-addon-legacy-691000" (driver="docker")
	I0223 16:51:56.842501   27829 client.go:168] LocalClient.Create starting
	I0223 16:51:56.842718   27829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 16:51:56.842817   27829 main.go:141] libmachine: Decoding PEM data...
	I0223 16:51:56.842848   27829 main.go:141] libmachine: Parsing certificate...
	I0223 16:51:56.842966   27829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 16:51:56.843038   27829 main.go:141] libmachine: Decoding PEM data...
	I0223 16:51:56.843055   27829 main.go:141] libmachine: Parsing certificate...
	I0223 16:51:56.843828   27829 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-691000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 16:51:56.900201   27829 cli_runner.go:211] docker network inspect ingress-addon-legacy-691000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 16:51:56.900314   27829 network_create.go:281] running [docker network inspect ingress-addon-legacy-691000] to gather additional debugging logs...
	I0223 16:51:56.900330   27829 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-691000
	W0223 16:51:56.953733   27829 cli_runner.go:211] docker network inspect ingress-addon-legacy-691000 returned with exit code 1
	I0223 16:51:56.953759   27829 network_create.go:284] error running [docker network inspect ingress-addon-legacy-691000]: docker network inspect ingress-addon-legacy-691000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-691000
	I0223 16:51:56.953769   27829 network_create.go:286] output of [docker network inspect ingress-addon-legacy-691000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-691000
	
	** /stderr **
	I0223 16:51:56.953869   27829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 16:51:57.008174   27829 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001266d50}
	I0223 16:51:57.008205   27829 network_create.go:123] attempt to create docker network ingress-addon-legacy-691000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0223 16:51:57.008274   27829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 ingress-addon-legacy-691000
	I0223 16:51:57.094976   27829 network_create.go:107] docker network ingress-addon-legacy-691000 192.168.49.0/24 created
	I0223 16:51:57.095022   27829 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-691000" container
	I0223 16:51:57.095156   27829 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 16:51:57.149625   27829 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-691000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --label created_by.minikube.sigs.k8s.io=true
	I0223 16:51:57.204400   27829 oci.go:103] Successfully created a docker volume ingress-addon-legacy-691000
	I0223 16:51:57.204533   27829 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-691000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --entrypoint /usr/bin/test -v ingress-addon-legacy-691000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 16:51:57.659487   27829 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-691000
	I0223 16:51:57.659554   27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 16:51:57.659568   27829 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 16:51:57.659694   27829 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-691000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 16:52:04.035759   27829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-691000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.375793536s)
	I0223 16:52:04.035779   27829 kic.go:199] duration metric: took 6.376058 seconds to extract preloaded images to volume
	I0223 16:52:04.035901   27829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 16:52:04.176831   27829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-691000 --name ingress-addon-legacy-691000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --network ingress-addon-legacy-691000 --ip 192.168.49.2 --volume ingress-addon-legacy-691000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 16:52:04.526734   27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Running}}
	I0223 16:52:04.639526   27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
	I0223 16:52:04.701405   27829 cli_runner.go:164] Run: docker exec ingress-addon-legacy-691000 stat /var/lib/dpkg/alternatives/iptables
	I0223 16:52:04.809122   27829 oci.go:144] the created container "ingress-addon-legacy-691000" has a running status.
	I0223 16:52:04.809160   27829 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa...
	I0223 16:52:04.893385   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 16:52:04.893456   27829 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 16:52:04.997763   27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
	I0223 16:52:05.061535   27829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 16:52:05.061554   27829 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-691000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 16:52:05.161069   27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
	I0223 16:52:05.218395   27829 machine.go:88] provisioning docker machine ...
	I0223 16:52:05.218452   27829 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-691000"
	I0223 16:52:05.218562   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:05.277315   27829 main.go:141] libmachine: Using SSH client type: native
	I0223 16:52:05.277715   27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57528 <nil> <nil>}
	I0223 16:52:05.277731   27829 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-691000 && echo "ingress-addon-legacy-691000" | sudo tee /etc/hostname
	I0223 16:52:05.423375   27829 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-691000
	
	I0223 16:52:05.423486   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:05.479830   27829 main.go:141] libmachine: Using SSH client type: native
	I0223 16:52:05.480179   27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57528 <nil> <nil>}
	I0223 16:52:05.480193   27829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-691000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-691000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-691000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 16:52:05.616345   27829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 16:52:05.616368   27829 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 16:52:05.616387   27829 ubuntu.go:177] setting up certificates
	I0223 16:52:05.616392   27829 provision.go:83] configureAuth start
	I0223 16:52:05.616466   27829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-691000
	I0223 16:52:05.672597   27829 provision.go:138] copyHostCerts
	I0223 16:52:05.672641   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 16:52:05.672697   27829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 16:52:05.672706   27829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 16:52:05.672811   27829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 16:52:05.672979   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 16:52:05.673015   27829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 16:52:05.673020   27829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 16:52:05.673084   27829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 16:52:05.673211   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 16:52:05.673256   27829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 16:52:05.673262   27829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 16:52:05.673323   27829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 16:52:05.673457   27829 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-691000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-691000]
	I0223 16:52:05.815029   27829 provision.go:172] copyRemoteCerts
	I0223 16:52:05.815093   27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 16:52:05.815149   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:05.871932   27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
	I0223 16:52:05.968616   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 16:52:05.968710   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 16:52:05.986361   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 16:52:05.986443   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0223 16:52:06.004761   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 16:52:06.004838   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 16:52:06.022014   27829 provision.go:86] duration metric: configureAuth took 405.596663ms
	I0223 16:52:06.022034   27829 ubuntu.go:193] setting minikube options for container-runtime
	I0223 16:52:06.022193   27829 config.go:182] Loaded profile config "ingress-addon-legacy-691000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 16:52:06.022263   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:06.080229   27829 main.go:141] libmachine: Using SSH client type: native
	I0223 16:52:06.080591   27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57528 <nil> <nil>}
	I0223 16:52:06.080609   27829 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 16:52:06.214396   27829 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 16:52:06.214416   27829 ubuntu.go:71] root file system type: overlay
	I0223 16:52:06.214509   27829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 16:52:06.214593   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:06.271367   27829 main.go:141] libmachine: Using SSH client type: native
	I0223 16:52:06.271726   27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57528 <nil> <nil>}
	I0223 16:52:06.271774   27829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 16:52:06.415765   27829 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 16:52:06.415859   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:06.473267   27829 main.go:141] libmachine: Using SSH client type: native
	I0223 16:52:06.473618   27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57528 <nil> <nil>}
	I0223 16:52:06.473632   27829 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 16:52:07.121227   27829 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 00:52:06.412418193 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 16:52:07.121272   27829 machine.go:91] provisioned docker machine in 1.902775682s
	I0223 16:52:07.121279   27829 client.go:171] LocalClient.Create took 10.278521565s
	I0223 16:52:07.121303   27829 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-691000" took 10.278641491s
	I0223 16:52:07.121315   27829 start.go:300] post-start starting for "ingress-addon-legacy-691000" (driver="docker")
	I0223 16:52:07.121320   27829 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 16:52:07.121437   27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 16:52:07.121531   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:07.183236   27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
	I0223 16:52:07.279272   27829 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 16:52:07.282912   27829 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 16:52:07.282932   27829 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 16:52:07.282939   27829 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 16:52:07.282945   27829 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 16:52:07.282957   27829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 16:52:07.283067   27829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 16:52:07.283240   27829 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 16:52:07.283248   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /etc/ssl/certs/248852.pem
	I0223 16:52:07.283439   27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 16:52:07.290917   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 16:52:07.309069   27829 start.go:303] post-start completed in 187.731223ms
	I0223 16:52:07.309604   27829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-691000
	I0223 16:52:07.369800   27829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/config.json ...
	I0223 16:52:07.370294   27829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 16:52:07.370370   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:07.427650   27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
	I0223 16:52:07.523046   27829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 16:52:07.527917   27829 start.go:128] duration metric: createHost completed in 10.736876549s
	I0223 16:52:07.527938   27829 start.go:83] releasing machines lock for "ingress-addon-legacy-691000", held for 10.736967972s
	I0223 16:52:07.528034   27829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-691000
	I0223 16:52:07.585355   27829 ssh_runner.go:195] Run: cat /version.json
	I0223 16:52:07.585388   27829 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 16:52:07.585426   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:07.585475   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:07.648661   27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
	I0223 16:52:07.648898   27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
	I0223 16:52:07.740553   27829 ssh_runner.go:195] Run: systemctl --version
	I0223 16:52:08.002210   27829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 16:52:08.007538   27829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 16:52:08.027598   27829 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 16:52:08.027681   27829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 16:52:08.041564   27829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 16:52:08.049470   27829 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 16:52:08.049485   27829 start.go:485] detecting cgroup driver to use...
	I0223 16:52:08.049496   27829 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 16:52:08.049574   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 16:52:08.062788   27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0223 16:52:08.071363   27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 16:52:08.079741   27829 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 16:52:08.079798   27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 16:52:08.088356   27829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 16:52:08.097037   27829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 16:52:08.105846   27829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 16:52:08.114332   27829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 16:52:08.122354   27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 16:52:08.130786   27829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 16:52:08.138232   27829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 16:52:08.145921   27829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 16:52:08.215543   27829 ssh_runner.go:195] Run: sudo systemctl restart containerd
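The sed edits above pin containerd to the cgroupfs cgroup driver before the restart; the same driver is later applied to Docker and the kubelet. A hedged sketch for confirming the three agree (container name and file paths are taken from this log; /var/lib/kubelet/config.yaml only exists once kubeadm has written it further down):

    # cgroup driver as seen by containerd, docker and the kubelet
    docker exec ingress-addon-legacy-691000 grep -n 'SystemdCgroup' /etc/containerd/config.toml
    docker exec ingress-addon-legacy-691000 cat /etc/docker/daemon.json
    docker exec ingress-addon-legacy-691000 grep -n 'cgroupDriver' /var/lib/kubelet/config.yaml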
	I0223 16:52:08.285440   27829 start.go:485] detecting cgroup driver to use...
	I0223 16:52:08.285461   27829 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 16:52:08.285538   27829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 16:52:08.299424   27829 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 16:52:08.299491   27829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 16:52:08.310802   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 16:52:08.324518   27829 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 16:52:08.388955   27829 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 16:52:08.489621   27829 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 16:52:08.489640   27829 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 16:52:08.503856   27829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 16:52:08.598167   27829 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 16:52:08.841237   27829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 16:52:08.868118   27829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 16:52:08.916583   27829 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	I0223 16:52:08.916744   27829 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-691000 dig +short host.docker.internal
	I0223 16:52:09.032899   27829 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 16:52:09.033014   27829 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 16:52:09.037842   27829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 16:52:09.048692   27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:52:09.107753   27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 16:52:09.107840   27829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 16:52:09.128434   27829 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 16:52:09.128452   27829 docker.go:560] Images already preloaded, skipping extraction
	I0223 16:52:09.128540   27829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 16:52:09.150034   27829 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 16:52:09.150050   27829 cache_images.go:84] Images are preloaded, skipping loading
	I0223 16:52:09.150151   27829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 16:52:09.175862   27829 cni.go:84] Creating CNI manager for ""
	I0223 16:52:09.175880   27829 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 16:52:09.175895   27829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 16:52:09.175909   27829 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-691000 NodeName:ingress-addon-legacy-691000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 16:52:09.176023   27829 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-691000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
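
A hedged sketch for exercising the generated config without mutating the node, using the kubeadm binary and config path that appear elsewhere in this log (minikube itself does not run a dry-run here; this is only an illustration, and the file is staged as kubeadm.yaml.new before being copied to kubeadm.yaml):

    docker exec ingress-addon-legacy-691000 sudo \
      /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run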
	
	I0223 16:52:09.176107   27829 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-691000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
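
The drop-in above is what points ExecStart at the v1.18.20 kubelet with the docker runtime and the node IP. A minimal check, assuming systemd inside the kic node container, that the drop-in landed and what state the service reaches:

    docker exec ingress-addon-legacy-691000 systemctl cat kubelet
    docker exec ingress-addon-legacy-691000 systemctl status kubelet --no-pager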
	I0223 16:52:09.176167   27829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0223 16:52:09.184267   27829 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 16:52:09.184325   27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 16:52:09.192988   27829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0223 16:52:09.208585   27829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0223 16:52:09.222134   27829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0223 16:52:09.235490   27829 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0223 16:52:09.240304   27829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 16:52:09.250638   27829 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000 for IP: 192.168.49.2
	I0223 16:52:09.250660   27829 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:52:09.250840   27829 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 16:52:09.250899   27829 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 16:52:09.250946   27829 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.key
	I0223 16:52:09.250958   27829 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.crt with IP's: []
	I0223 16:52:09.352824   27829 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.crt ...
	I0223 16:52:09.352839   27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.crt: {Name:mkf4a38d775d0b7de4649fb0074f3eec41a516ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:52:09.353149   27829 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.key ...
	I0223 16:52:09.353158   27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.key: {Name:mkf91f763e830a3815d402105422bcece62e2244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:52:09.353363   27829 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2
	I0223 16:52:09.353380   27829 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 16:52:09.432508   27829 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2 ...
	I0223 16:52:09.432521   27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2: {Name:mk614948d8a87630582cfd9d4f25e3c57c069cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:52:09.432824   27829 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2 ...
	I0223 16:52:09.432832   27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2: {Name:mk8243386d48e231dfda2178217165573ac326e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:52:09.433023   27829 certs.go:333] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt
	I0223 16:52:09.433196   27829 certs.go:337] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key
	I0223 16:52:09.433365   27829 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key
	I0223 16:52:09.433380   27829 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt with IP's: []
	I0223 16:52:09.549410   27829 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt ...
	I0223 16:52:09.549424   27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt: {Name:mk3248da3723150da08b3caa3ba0766e319a02a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:52:09.567970   27829 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key ...
	I0223 16:52:09.568002   27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key: {Name:mkc36b64e51acc0589a7c6bce01544c39b448cfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:52:09.590352   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 16:52:09.590438   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 16:52:09.590482   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 16:52:09.590522   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 16:52:09.590561   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 16:52:09.590598   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 16:52:09.590634   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 16:52:09.590668   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 16:52:09.590835   27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 16:52:09.590935   27829 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 16:52:09.590966   27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 16:52:09.591042   27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 16:52:09.591126   27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 16:52:09.591174   27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 16:52:09.591269   27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 16:52:09.591312   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 16:52:09.591339   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem -> /usr/share/ca-certificates/24885.pem
	I0223 16:52:09.591364   27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /usr/share/ca-certificates/248852.pem
	I0223 16:52:09.592023   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 16:52:09.610797   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 16:52:09.627908   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 16:52:09.645477   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 16:52:09.663562   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 16:52:09.680796   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 16:52:09.699445   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 16:52:09.718105   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 16:52:09.735290   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 16:52:09.753419   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 16:52:09.770680   27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 16:52:09.788221   27829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 16:52:09.802626   27829 ssh_runner.go:195] Run: openssl version
	I0223 16:52:09.808270   27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 16:52:09.816379   27829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 16:52:09.820269   27829 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 16:52:09.820325   27829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 16:52:09.825919   27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 16:52:09.834258   27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 16:52:09.842908   27829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 16:52:09.847687   27829 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 16:52:09.847753   27829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 16:52:09.853444   27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 16:52:09.861603   27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 16:52:09.869839   27829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 16:52:09.873871   27829 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 16:52:09.873921   27829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 16:52:09.879227   27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 16:52:09.887494   27829 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 16:52:09.887606   27829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 16:52:09.907703   27829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 16:52:09.915720   27829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 16:52:09.923117   27829 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 16:52:09.923178   27829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 16:52:09.930732   27829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 16:52:09.930767   27829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 16:52:09.981658   27829 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 16:52:09.981712   27829 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 16:52:10.149872   27829 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 16:52:10.149957   27829 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 16:52:10.150029   27829 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 16:52:10.306960   27829 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 16:52:10.307460   27829 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 16:52:10.307520   27829 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 16:52:10.384782   27829 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 16:52:10.427168   27829 out.go:204]   - Generating certificates and keys ...
	I0223 16:52:10.427258   27829 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 16:52:10.427320   27829 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 16:52:10.459532   27829 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 16:52:10.604137   27829 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 16:52:10.680105   27829 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 16:52:10.760856   27829 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 16:52:10.898702   27829 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 16:52:10.898826   27829 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 16:52:11.097799   27829 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 16:52:11.097967   27829 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 16:52:11.327065   27829 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 16:52:11.434865   27829 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 16:52:11.723209   27829 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 16:52:11.723282   27829 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 16:52:11.882310   27829 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 16:52:11.983532   27829 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 16:52:12.206122   27829 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 16:52:12.340997   27829 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 16:52:12.359302   27829 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 16:52:12.379710   27829 out.go:204]   - Booting up control plane ...
	I0223 16:52:12.379931   27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 16:52:12.380129   27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 16:52:12.380317   27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 16:52:12.380457   27829 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 16:52:12.380718   27829 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 16:52:52.351203   27829 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 16:52:52.357264   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:52:52.357466   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:52:57.353104   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:52:57.354661   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:53:07.355162   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:53:07.355370   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:53:27.357356   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:53:27.357593   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:54:07.360166   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:54:07.360387   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:54:07.360421   27829 kubeadm.go:322] 
	I0223 16:54:07.360484   27829 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 16:54:07.360542   27829 kubeadm.go:322] 		timed out waiting for the condition
	I0223 16:54:07.360559   27829 kubeadm.go:322] 
	I0223 16:54:07.360600   27829 kubeadm.go:322] 	This error is likely caused by:
	I0223 16:54:07.360637   27829 kubeadm.go:322] 		- The kubelet is not running
	I0223 16:54:07.360766   27829 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 16:54:07.360785   27829 kubeadm.go:322] 
	I0223 16:54:07.360916   27829 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 16:54:07.360963   27829 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 16:54:07.361000   27829 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 16:54:07.361005   27829 kubeadm.go:322] 
	I0223 16:54:07.361184   27829 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 16:54:07.361279   27829 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 16:54:07.361297   27829 kubeadm.go:322] 
	I0223 16:54:07.361401   27829 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 16:54:07.361461   27829 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 16:54:07.361594   27829 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 16:54:07.361633   27829 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 16:54:07.361651   27829 kubeadm.go:322] 
	I0223 16:54:07.365064   27829 kubeadm.go:322] W0224 00:52:09.980624    1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 16:54:07.365222   27829 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 16:54:07.365287   27829 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 16:54:07.365400   27829 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0223 16:54:07.365495   27829 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 16:54:07.365607   27829 kubeadm.go:322] W0224 00:52:12.345572    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 16:54:07.365748   27829 kubeadm.go:322] W0224 00:52:12.346273    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 16:54:07.365826   27829 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 16:54:07.365899   27829 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 16:54:07.366097   27829 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 00:52:09.980624    1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 00:52:12.345572    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 00:52:12.346273    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 00:52:09.980624    1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 00:52:12.345572    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 00:52:12.346273    1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
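
Because the failing kubelet runs inside the docker-driver node rather than on the macOS host, the troubleshooting commands kubeadm suggests above have to be routed through the node container. A hedged sketch, reusing only the commands already named in the output:

    # kubelet state and recent log inside the node container
    docker exec ingress-addon-legacy-691000 systemctl status kubelet --no-pager
    docker exec ingress-addon-legacy-691000 journalctl -xeu kubelet --no-pager | tail -n 50
    # control-plane containers that may have crashed on start
    docker exec ingress-addon-legacy-691000 sh -c 'docker ps -a | grep kube | grep -v pause'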
	
	I0223 16:54:07.366130   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 16:54:07.777324   27829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 16:54:07.786991   27829 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 16:54:07.787048   27829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 16:54:07.794471   27829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 16:54:07.794492   27829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 16:54:07.841229   27829 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 16:54:07.841290   27829 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 16:54:08.000421   27829 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 16:54:08.000512   27829 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 16:54:08.000600   27829 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 16:54:08.151909   27829 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 16:54:08.152579   27829 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 16:54:08.152620   27829 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 16:54:08.226589   27829 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 16:54:08.248030   27829 out.go:204]   - Generating certificates and keys ...
	I0223 16:54:08.248152   27829 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 16:54:08.248220   27829 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 16:54:08.248286   27829 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 16:54:08.248341   27829 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 16:54:08.248395   27829 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 16:54:08.248453   27829 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 16:54:08.248535   27829 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 16:54:08.248588   27829 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 16:54:08.248653   27829 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 16:54:08.248728   27829 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 16:54:08.248759   27829 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 16:54:08.248844   27829 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 16:54:08.289940   27829 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 16:54:08.415232   27829 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 16:54:08.519711   27829 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 16:54:08.634303   27829 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 16:54:08.634640   27829 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 16:54:08.656511   27829 out.go:204]   - Booting up control plane ...
	I0223 16:54:08.656815   27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 16:54:08.656974   27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 16:54:08.657086   27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 16:54:08.657227   27829 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 16:54:08.657499   27829 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 16:54:48.644721   27829 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 16:54:48.645420   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:54:48.645641   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:54:53.645656   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:54:53.645825   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:55:03.648224   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:55:03.648447   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:55:23.649243   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:55:23.649534   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:56:03.652480   27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 16:56:03.652701   27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 16:56:03.652718   27829 kubeadm.go:322] 
	I0223 16:56:03.652762   27829 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 16:56:03.652806   27829 kubeadm.go:322] 		timed out waiting for the condition
	I0223 16:56:03.652813   27829 kubeadm.go:322] 
	I0223 16:56:03.652850   27829 kubeadm.go:322] 	This error is likely caused by:
	I0223 16:56:03.652885   27829 kubeadm.go:322] 		- The kubelet is not running
	I0223 16:56:03.652989   27829 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 16:56:03.652996   27829 kubeadm.go:322] 
	I0223 16:56:03.653132   27829 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 16:56:03.653193   27829 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 16:56:03.653236   27829 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 16:56:03.653244   27829 kubeadm.go:322] 
	I0223 16:56:03.653351   27829 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 16:56:03.653443   27829 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 16:56:03.653450   27829 kubeadm.go:322] 
	I0223 16:56:03.653558   27829 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 16:56:03.653617   27829 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 16:56:03.653708   27829 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 16:56:03.653740   27829 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 16:56:03.653749   27829 kubeadm.go:322] 
	I0223 16:56:03.655975   27829 kubeadm.go:322] W0224 00:54:07.840130    3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 16:56:03.656132   27829 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 16:56:03.656222   27829 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 16:56:03.656322   27829 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0223 16:56:03.656396   27829 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 16:56:03.656503   27829 kubeadm.go:322] W0224 00:54:08.637741    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 16:56:03.656595   27829 kubeadm.go:322] W0224 00:54:08.638440    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 16:56:03.656665   27829 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 16:56:03.656732   27829 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 16:56:03.656765   27829 kubeadm.go:403] StartCluster complete in 3m53.763660891s
	I0223 16:56:03.656854   27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 16:56:03.677229   27829 logs.go:277] 0 containers: []
	W0223 16:56:03.677248   27829 logs.go:279] No container was found matching "kube-apiserver"
	I0223 16:56:03.677332   27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 16:56:03.696350   27829 logs.go:277] 0 containers: []
	W0223 16:56:03.696366   27829 logs.go:279] No container was found matching "etcd"
	I0223 16:56:03.696432   27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 16:56:03.716285   27829 logs.go:277] 0 containers: []
	W0223 16:56:03.716298   27829 logs.go:279] No container was found matching "coredns"
	I0223 16:56:03.716370   27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 16:56:03.735820   27829 logs.go:277] 0 containers: []
	W0223 16:56:03.735833   27829 logs.go:279] No container was found matching "kube-scheduler"
	I0223 16:56:03.735904   27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 16:56:03.754365   27829 logs.go:277] 0 containers: []
	W0223 16:56:03.754388   27829 logs.go:279] No container was found matching "kube-proxy"
	I0223 16:56:03.754462   27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 16:56:03.774370   27829 logs.go:277] 0 containers: []
	W0223 16:56:03.774384   27829 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 16:56:03.774460   27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 16:56:03.793239   27829 logs.go:277] 0 containers: []
	W0223 16:56:03.793253   27829 logs.go:279] No container was found matching "kindnet"
	I0223 16:56:03.793261   27829 logs.go:123] Gathering logs for kubelet ...
	I0223 16:56:03.793268   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 16:56:03.832301   27829 logs.go:123] Gathering logs for dmesg ...
	I0223 16:56:03.832315   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 16:56:03.846105   27829 logs.go:123] Gathering logs for describe nodes ...
	I0223 16:56:03.846118   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 16:56:03.899041   27829 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 16:56:03.899052   27829 logs.go:123] Gathering logs for Docker ...
	I0223 16:56:03.899060   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 16:56:03.922973   27829 logs.go:123] Gathering logs for container status ...
	I0223 16:56:03.922986   27829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 16:56:05.971858   27829 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048810956s)
	W0223 16:56:05.971982   27829 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 00:54:07.840130    3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 00:54:08.637741    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 00:54:08.638440    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 16:56:05.971999   27829 out.go:239] * 
	* 
	W0223 16:56:05.972130   27829 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 00:54:07.840130    3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 00:54:08.637741    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 00:54:08.638440    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 00:54:07.840130    3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 00:54:08.637741    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 00:54:08.638440    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 16:56:05.972143   27829 out.go:239] * 
	* 
	W0223 16:56:05.972784   27829 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 16:56:06.037647   27829 out.go:177] 
	W0223 16:56:06.101584   27829 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 00:54:07.840130    3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 00:54:08.637741    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 00:54:08.638440    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 00:54:07.840130    3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 00:54:08.637741    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 00:54:08.638440    3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 16:56:06.101754   27829 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 16:56:06.101822   27829 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 16:56:06.123458   27829 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (263.93s)
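The failure mode above is minikube's K8S_KUBELET_NOT_RUNNING exit path: kubeadm's wait-control-plane phase timed out because the kubelet's http://localhost:10248/healthz endpoint never answered, and the follow-up docker ps filters found no control-plane containers at all. A minimal triage sketch for a local reproduction, assuming the profile name from this run (ingress-addon-legacy-691000) and using only the commands the log itself suggests; the exact ssh invocation form is an assumption:

	# Check kubelet state inside the minikube node (the commands kubeadm recommends above)
	out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 ssh "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 ssh "sudo journalctl -xeu kubelet"

	# List any Kubernetes containers the runtime did start
	out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 ssh "docker ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup-driver override suggested at the end of the log
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --extra-config=kubelet.cgroup-driver=systemd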

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (82.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 addons enable ingress --alsologtostderr -v=5
E0223 16:56:06.804085   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m22.012290018s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 16:56:06.267338   28190 out.go:296] Setting OutFile to fd 1 ...
	I0223 16:56:06.267525   28190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:56:06.267530   28190 out.go:309] Setting ErrFile to fd 2...
	I0223 16:56:06.267534   28190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:56:06.267634   28190 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 16:56:06.289374   28190 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0223 16:56:06.310701   28190 config.go:182] Loaded profile config "ingress-addon-legacy-691000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 16:56:06.310723   28190 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-691000"
	I0223 16:56:06.310737   28190 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-691000"
	I0223 16:56:06.311114   28190 host.go:66] Checking if "ingress-addon-legacy-691000" exists ...
	I0223 16:56:06.311668   28190 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
	I0223 16:56:06.389873   28190 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0223 16:56:06.431612   28190 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0223 16:56:06.452792   28190 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 16:56:06.473687   28190 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 16:56:06.494767   28190 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0223 16:56:06.494788   28190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0223 16:56:06.494865   28190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:56:06.551811   28190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
	I0223 16:56:06.652777   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:06.706173   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:06.706216   28190 retry.go:31] will retry after 142.463576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:06.850917   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:06.904867   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:06.904884   28190 retry.go:31] will retry after 242.516121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:07.149516   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:07.201022   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:07.201038   28190 retry.go:31] will retry after 525.373476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:07.728791   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:07.783117   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:07.783132   28190 retry.go:31] will retry after 453.368635ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:08.237500   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:08.292855   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:08.292877   28190 retry.go:31] will retry after 1.378833509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:09.673978   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:09.728034   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:09.728049   28190 retry.go:31] will retry after 1.277209102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:11.006174   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:11.059621   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:11.059637   28190 retry.go:31] will retry after 3.858679262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:14.920657   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:14.974360   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:14.974374   28190 retry.go:31] will retry after 4.873140557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:19.848122   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:19.902873   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:19.902889   28190 retry.go:31] will retry after 3.645127017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:23.548567   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:23.602836   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:23.602862   28190 retry.go:31] will retry after 11.121336284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:34.725766   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:34.783738   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:34.783754   28190 retry.go:31] will retry after 12.691288959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:47.477606   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:56:47.532168   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:56:47.532184   28190 retry.go:31] will retry after 15.607718674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:03.142544   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:57:03.197195   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:03.197211   28190 retry.go:31] will retry after 24.87052262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:28.069722   28190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 16:57:28.123381   28190 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:28.123411   28190 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-691000"
	I0223 16:57:28.145041   28190 out.go:177] * Verifying ingress addon...
	I0223 16:57:28.167331   28190 out.go:177] 
	W0223 16:57:28.189032   28190 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-691000" does not exist: client config: context "ingress-addon-legacy-691000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-691000" does not exist: client config: context "ingress-addon-legacy-691000" does not exist]
	W0223 16:57:28.189049   28190 out.go:239] * 
	* 
	W0223 16:57:28.192734   28190 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 16:57:28.213820   28190 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
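Both failure boxes above ask for the same two artifacts. A minimal collection sketch, run from the same workspace the test used (the binary path and the temp log path are taken verbatim from the output above; whether that temp file still exists after the run is an assumption):

	# Gather the cluster log the failure box requests.
	out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 logs --file=logs.txt

	# Keep a copy of the addon-enable log named in the box (path printed above; it may already have been cleaned up).
	cp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log .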
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-691000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-691000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6",
	        "Created": "2023-02-24T00:52:04.231139849Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T00:52:04.516505263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/hosts",
	        "LogPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6-json.log",
	        "Name": "/ingress-addon-legacy-691000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-691000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-691000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-691000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-691000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-691000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-691000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-691000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b64103bca75a0812dffdddf33627bec1f6002bc212d4f39659d1060d204e5c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57531"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57532"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8b64103bca75",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-691000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e4d92376052",
	                        "ingress-addon-legacy-691000"
	                    ],
	                    "NetworkID": "21b3ec8719768f2ea0e9d513ea10ab2881a9971769cf480782e0c3e9792c9064",
	                    "EndpointID": "48c5c8d380a659ff08d0c616abe7c61ec60cb24403148d10f3e386285c34a50a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-691000 -n ingress-addon-legacy-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-691000 -n ingress-addon-legacy-691000: exit status 6 (387.908995ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 16:57:28.675303   28276 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-691000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-691000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (82.46s)
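Every retry above was refused on localhost:8443, and the post-mortem status shows the profile missing from the Jenkins kubeconfig, which points at an apiserver that never became reachable rather than a problem with the ingress manifest itself. A hedged triage sketch against this specific run (57532 is the host port Docker mapped to the node's 8443 according to the inspect output above, so it is only valid while that container exists):

	# Re-check the profile and repair the kubeconfig entry the warning mentions.
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-691000
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-691000
	kubectl config get-contexts

	# See whether anything answers TLS on the port Docker published for 8443.
	curl -k https://127.0.0.1:57532/version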

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (72.95s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 addons enable ingress-dns --alsologtostderr -v=5
E0223 16:57:28.728234   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-691000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m12.484158816s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 16:57:28.730650   28286 out.go:296] Setting OutFile to fd 1 ...
	I0223 16:57:28.730830   28286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:57:28.730835   28286 out.go:309] Setting ErrFile to fd 2...
	I0223 16:57:28.730839   28286 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:57:28.730949   28286 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 16:57:28.753010   28286 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0223 16:57:28.775030   28286 config.go:182] Loaded profile config "ingress-addon-legacy-691000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 16:57:28.775058   28286 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-691000"
	I0223 16:57:28.775082   28286 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-691000"
	I0223 16:57:28.775712   28286 host.go:66] Checking if "ingress-addon-legacy-691000" exists ...
	I0223 16:57:28.776697   28286 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
	I0223 16:57:28.855632   28286 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0223 16:57:28.876413   28286 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0223 16:57:28.897332   28286 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0223 16:57:28.897354   28286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0223 16:57:28.897439   28286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
	I0223 16:57:28.954185   28286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
	I0223 16:57:29.055771   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:29.105901   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:29.105946   28286 retry.go:31] will retry after 133.671366ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:29.241356   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:29.293855   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:29.293871   28286 retry.go:31] will retry after 275.082736ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:29.569535   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:29.625121   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:29.625139   28286 retry.go:31] will retry after 557.778325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:30.185261   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:30.238731   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:30.238749   28286 retry.go:31] will retry after 1.140486758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:31.379958   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:31.433476   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:31.433490   28286 retry.go:31] will retry after 1.468647501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:32.904512   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:32.958692   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:32.958708   28286 retry.go:31] will retry after 1.398777397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:34.359747   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:34.413008   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:34.413030   28286 retry.go:31] will retry after 1.675381971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:36.090030   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:36.142932   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:36.142946   28286 retry.go:31] will retry after 5.180427986s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:41.325729   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:41.378940   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:41.378956   28286 retry.go:31] will retry after 6.249142749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:47.628450   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:47.681592   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:47.681607   28286 retry.go:31] will retry after 9.083716001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:56.765711   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:57:56.818729   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:57:56.818743   28286 retry.go:31] will retry after 14.023132585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:58:10.843198   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:58:10.897263   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:58:10.897277   28286 retry.go:31] will retry after 12.594861628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:58:23.494670   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:58:23.548802   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:58:23.548817   28286 retry.go:31] will retry after 17.474028071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:58:41.024542   28286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 16:58:41.078272   28286 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 16:58:41.100273   28286 out.go:177] 
	W0223 16:58:41.122065   28286 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0223 16:58:41.122095   28286 out.go:239] * 
	* 
	W0223 16:58:41.127034   28286 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 16:58:41.148054   28286 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-691000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-691000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6",
	        "Created": "2023-02-24T00:52:04.231139849Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T00:52:04.516505263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/hosts",
	        "LogPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6-json.log",
	        "Name": "/ingress-addon-legacy-691000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-691000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-691000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-691000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-691000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-691000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-691000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-691000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b64103bca75a0812dffdddf33627bec1f6002bc212d4f39659d1060d204e5c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57531"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57532"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8b64103bca75",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-691000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e4d92376052",
	                        "ingress-addon-legacy-691000"
	                    ],
	                    "NetworkID": "21b3ec8719768f2ea0e9d513ea10ab2881a9971769cf480782e0c3e9792c9064",
	                    "EndpointID": "48c5c8d380a659ff08d0c616abe7c61ec60cb24403148d10f3e386285c34a50a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-691000 -n ingress-addon-legacy-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-691000 -n ingress-addon-legacy-691000: exit status 6 (401.873444ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 16:58:41.622237   28361 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-691000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-691000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (72.95s)
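Note: the status error above ("ingress-addon-legacy-691000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig) means the profile's entry is missing from the kubeconfig, which is why the host check exits with status 6 even though the container reports Running. A minimal manual check, following the fix the warning itself suggests (illustrative only; assumes the profile container is still up):
	# list the contexts kubectl currently knows about; the profile name should appear here
	kubectl config get-contexts
	# regenerate the kubeconfig entry for this profile, as the warning recommends
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-691000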

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-691000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-691000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6",
	        "Created": "2023-02-24T00:52:04.231139849Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T00:52:04.516505263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/hosts",
	        "LogPath": "/var/lib/docker/containers/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6/0e4d92376052577786c8bfb77bec9ae4220474c87d6c3081fa937abb3ca8dcd6-json.log",
	        "Name": "/ingress-addon-legacy-691000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-691000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-691000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c32737a31fbdea2da8ba9c40dcea7eeefd6abcb50422df31661d8fdc212633a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-691000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-691000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-691000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-691000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-691000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b64103bca75a0812dffdddf33627bec1f6002bc212d4f39659d1060d204e5c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57531"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57532"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8b64103bca75",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-691000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0e4d92376052",
	                        "ingress-addon-legacy-691000"
	                    ],
	                    "NetworkID": "21b3ec8719768f2ea0e9d513ea10ab2881a9971769cf480782e0c3e9792c9064",
	                    "EndpointID": "48c5c8d380a659ff08d0c616abe7c61ec60cb24403148d10f3e386285c34a50a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-691000 -n ingress-addon-legacy-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-691000 -n ingress-addon-legacy-691000: exit status 6 (389.812373ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 16:58:42.074862   28373 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-691000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-691000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- rollout status deployment/busybox
E0223 17:05:18.774000   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-384000 -- rollout status deployment/busybox: (3.327395762s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1, output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- nslookup kubernetes.io: exit status 1 (151.772938ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-nlclz could not resolve 'kubernetes.io': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-vb76c -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- nslookup kubernetes.default: exit status 1 (156.489944ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-nlclz could not resolve 'kubernetes.default': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-vb76c -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (158.007424ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-nlclz could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-vb76c -- nslookup kubernetes.default.svc.cluster.local
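Note: busybox-6b86dd6d48-nlclz points at the cluster DNS service (10.96.0.10) but cannot resolve any name, while busybox-6b86dd6d48-vb76c resolves fine, and only one pod IP was reported earlier. An illustrative way to narrow this down by hand, reusing the same minikube kubectl pass-through the test uses (assumes the cluster is still running):
	# show which node each replica was scheduled on and its pod IP
	out/minikube-darwin-amd64 kubectl -p multinode-384000 -- get pods -o wide
	# compare the DNS configuration visible inside the failing pod
	out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- cat /etc/resolv.conf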
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-384000
helpers_test.go:235: (dbg) docker inspect multinode-384000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f",
	        "Created": "2023-02-24T01:04:08.197937871Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477589,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:04:08.49208617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/hostname",
	        "HostsPath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/hosts",
	        "LogPath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f-json.log",
	        "Name": "/multinode-384000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-384000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-384000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-384000",
	                "Source": "/var/lib/docker/volumes/multinode-384000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-384000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-384000",
	                "name.minikube.sigs.k8s.io": "multinode-384000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "642c5d802f1b22b0b186cad96b3c3fed8b1a2ff3eb4af30f670a680efb442204",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58127"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58128"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58131"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/642c5d802f1b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-384000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "434de70f5e1d",
	                        "multinode-384000"
	                    ],
	                    "NetworkID": "0f1b3c4ce23f6eca8d55fa599b0450c172159cdf542356f741631ebb970a9e73",
	                    "EndpointID": "941bad6466f325e3d767afe0a34b3a34138d06b2d8d09ede556fb2aa2d4fa5d1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-384000 -n multinode-384000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-384000 logs -n 25: (2.334766173s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-596000                                  | second-596000        | jenkins | v1.29.0 | 23 Feb 23 17:02 PST | 23 Feb 23 17:03 PST |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| delete  | -p second-596000                                  | second-596000        | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| delete  | -p first-594000                                   | first-594000         | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| start   | -p mount-start-1-491000                           | mount-start-1-491000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-1-491000 ssh -- ls                    | mount-start-1-491000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-502000 ssh -- ls                    | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-491000                           | mount-start-1-491000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-502000 ssh -- ls                    | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| start   | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| ssh     | mount-start-2-502000 ssh -- ls                    | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| delete  | -p mount-start-1-491000                           | mount-start-1-491000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| start   | -p multinode-384000                               | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:05 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- apply -f                   | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- rollout                    | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- get pods -o                | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- get pods -o                | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST |                     |
	|         | busybox-6b86dd6d48-nlclz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST |                     |
	|         | busybox-6b86dd6d48-nlclz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST |                     |
	|         | busybox-6b86dd6d48-nlclz -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 17:03:59
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 17:03:59.656062   30321 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:03:59.656244   30321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:03:59.656249   30321 out.go:309] Setting ErrFile to fd 2...
	I0223 17:03:59.656253   30321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:03:59.656363   30321 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:03:59.657822   30321 out.go:303] Setting JSON to false
	I0223 17:03:59.677236   30321 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7414,"bootTime":1677193225,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:03:59.677314   30321 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:03:59.699081   30321 out.go:177] * [multinode-384000] minikube v1.29.0 on Darwin 13.2
	I0223 17:03:59.720119   30321 notify.go:220] Checking for updates...
	I0223 17:03:59.742107   30321 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:03:59.763875   30321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:03:59.806984   30321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:03:59.827952   30321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:03:59.849179   30321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:03:59.892980   30321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:03:59.914339   30321 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:03:59.979719   30321 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:03:59.979843   30321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:04:00.126028   30321 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 01:04:00.031285978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:04:00.169633   30321 out.go:177] * Using the docker driver based on user configuration
	I0223 17:04:00.191172   30321 start.go:296] selected driver: docker
	I0223 17:04:00.191198   30321 start.go:857] validating driver "docker" against <nil>
	I0223 17:04:00.191216   30321 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:04:00.195103   30321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:04:00.339690   30321 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 01:04:00.245255337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:04:00.339817   30321 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 17:04:00.339997   30321 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 17:04:00.362872   30321 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 17:04:00.384274   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:04:00.384302   30321 cni.go:136] 0 nodes found, recommending kindnet
	I0223 17:04:00.384314   30321 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 17:04:00.384339   30321 start_flags.go:319] config:
	{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:04:00.427856   30321 out.go:177] * Starting control plane node multinode-384000 in cluster multinode-384000
	I0223 17:04:00.449100   30321 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:04:00.469974   30321 out.go:177] * Pulling base image ...
	I0223 17:04:00.512032   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:00.512062   30321 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:04:00.512131   30321 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 17:04:00.512153   30321 cache.go:57] Caching tarball of preloaded images
	I0223 17:04:00.512364   30321 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:04:00.512382   30321 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 17:04:00.514626   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:00.514674   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json: {Name:mk35965080677d4155364ecaf1133902c945959b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:00.569017   30321 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:04:00.569047   30321 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:04:00.569068   30321 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:04:00.569117   30321 start.go:364] acquiring machines lock for multinode-384000: {Name:mk710a8f130795841106a8d589daddf1c49570ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:04:00.569263   30321 start.go:368] acquired machines lock for "multinode-384000" in 134.003µs
	I0223 17:04:00.569292   30321 start.go:93] Provisioning new machine with config: &{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 17:04:00.569374   30321 start.go:125] createHost starting for "" (driver="docker")
	I0223 17:04:00.612785   30321 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 17:04:00.613202   30321 start.go:159] libmachine.API.Create for "multinode-384000" (driver="docker")
	I0223 17:04:00.613245   30321 client.go:168] LocalClient.Create starting
	I0223 17:04:00.613418   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 17:04:00.613500   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:00.613531   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:00.613646   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 17:04:00.613708   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:00.613725   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:00.614590   30321 cli_runner.go:164] Run: docker network inspect multinode-384000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 17:04:00.669024   30321 cli_runner.go:211] docker network inspect multinode-384000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 17:04:00.669115   30321 network_create.go:281] running [docker network inspect multinode-384000] to gather additional debugging logs...
	I0223 17:04:00.669129   30321 cli_runner.go:164] Run: docker network inspect multinode-384000
	W0223 17:04:00.724226   30321 cli_runner.go:211] docker network inspect multinode-384000 returned with exit code 1
	I0223 17:04:00.724253   30321 network_create.go:284] error running [docker network inspect multinode-384000]: docker network inspect multinode-384000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-384000
	I0223 17:04:00.724265   30321 network_create.go:286] output of [docker network inspect multinode-384000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-384000
	
	** /stderr **
	I0223 17:04:00.724343   30321 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 17:04:00.779800   30321 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:04:00.780129   30321 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d78b20}
	I0223 17:04:00.780143   30321 network_create.go:123] attempt to create docker network multinode-384000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 17:04:00.780222   30321 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-384000 multinode-384000
	I0223 17:04:00.868500   30321 network_create.go:107] docker network multinode-384000 192.168.58.0/24 created
	I0223 17:04:00.868530   30321 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-384000" container
	I0223 17:04:00.868639   30321 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 17:04:00.923921   30321 cli_runner.go:164] Run: docker volume create multinode-384000 --label name.minikube.sigs.k8s.io=multinode-384000 --label created_by.minikube.sigs.k8s.io=true
	I0223 17:04:00.978302   30321 oci.go:103] Successfully created a docker volume multinode-384000
	I0223 17:04:00.978432   30321 cli_runner.go:164] Run: docker run --rm --name multinode-384000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000 --entrypoint /usr/bin/test -v multinode-384000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 17:04:01.409854   30321 oci.go:107] Successfully prepared a docker volume multinode-384000
	I0223 17:04:01.409887   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:01.409901   30321 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 17:04:01.410012   30321 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 17:04:08.003538   30321 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.593535098s)
	I0223 17:04:08.003559   30321 kic.go:199] duration metric: took 6.593731 seconds to extract preloaded images to volume
	I0223 17:04:08.003681   30321 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 17:04:08.144725   30321 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-384000 --name multinode-384000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-384000 --network multinode-384000 --ip 192.168.58.2 --volume multinode-384000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 17:04:08.499616   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Running}}
	I0223 17:04:08.563566   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:08.631703   30321 cli_runner.go:164] Run: docker exec multinode-384000 stat /var/lib/dpkg/alternatives/iptables
	I0223 17:04:08.753389   30321 oci.go:144] the created container "multinode-384000" has a running status.
	I0223 17:04:08.753436   30321 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa...
	I0223 17:04:08.897712   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 17:04:08.897783   30321 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 17:04:09.003359   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:09.059915   30321 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 17:04:09.059934   30321 kic_runner.go:114] Args: [docker exec --privileged multinode-384000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 17:04:09.168252   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:09.225487   30321 machine.go:88] provisioning docker machine ...
	I0223 17:04:09.225538   30321 ubuntu.go:169] provisioning hostname "multinode-384000"
	I0223 17:04:09.225642   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:09.282374   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:09.282759   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:09.282774   30321 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-384000 && echo "multinode-384000" | sudo tee /etc/hostname
	I0223 17:04:09.425849   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-384000
	
	I0223 17:04:09.425930   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:09.548448   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:09.548876   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:09.548889   30321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-384000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-384000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-384000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:04:09.684703   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:04:09.684733   30321 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:04:09.684752   30321 ubuntu.go:177] setting up certificates
	I0223 17:04:09.684759   30321 provision.go:83] configureAuth start
	I0223 17:04:09.684846   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000
	I0223 17:04:09.742376   30321 provision.go:138] copyHostCerts
	I0223 17:04:09.742423   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:04:09.742480   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:04:09.742490   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:04:09.742643   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:04:09.742822   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:04:09.742854   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:04:09.742859   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:04:09.742935   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:04:09.743059   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:04:09.743095   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:04:09.743100   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:04:09.743167   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:04:09.743309   30321 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.multinode-384000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-384000]
	I0223 17:04:09.867609   30321 provision.go:172] copyRemoteCerts
	I0223 17:04:09.867667   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:04:09.867718   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:09.925349   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:10.020512   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 17:04:10.020606   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:04:10.037904   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 17:04:10.037986   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 17:04:10.055088   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 17:04:10.055168   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:04:10.072578   30321 provision.go:86] duration metric: configureAuth took 387.811375ms
	I0223 17:04:10.072594   30321 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:04:10.072760   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:04:10.072828   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:10.132164   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:10.132535   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:10.132548   30321 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:04:10.268194   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:04:10.268210   30321 ubuntu.go:71] root file system type: overlay
	I0223 17:04:10.268316   30321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:04:10.268391   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:10.325166   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:10.325518   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:10.325567   30321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:04:10.467391   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:04:10.467471   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:10.524963   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:10.525297   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:10.525311   30321 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:04:11.140475   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:04:10.465300927 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 17:04:11.140502   30321 machine.go:91] provisioned docker machine in 1.91500784s
	I0223 17:04:11.140511   30321 client.go:171] LocalClient.Create took 10.527374153s
	I0223 17:04:11.140526   30321 start.go:167] duration metric: libmachine.API.Create for "multinode-384000" took 10.52744427s
	I0223 17:04:11.140539   30321 start.go:300] post-start starting for "multinode-384000" (driver="docker")
	I0223 17:04:11.140553   30321 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:04:11.140629   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:04:11.140685   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.198979   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.293728   30321 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:04:11.297245   30321 command_runner.go:130] > NAME="Ubuntu"
	I0223 17:04:11.297253   30321 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 17:04:11.297257   30321 command_runner.go:130] > ID=ubuntu
	I0223 17:04:11.297260   30321 command_runner.go:130] > ID_LIKE=debian
	I0223 17:04:11.297264   30321 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 17:04:11.297268   30321 command_runner.go:130] > VERSION_ID="20.04"
	I0223 17:04:11.297274   30321 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 17:04:11.297279   30321 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 17:04:11.297283   30321 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 17:04:11.297298   30321 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 17:04:11.297302   30321 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 17:04:11.297306   30321 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 17:04:11.297364   30321 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:04:11.297374   30321 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:04:11.297381   30321 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:04:11.297386   30321 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:04:11.297396   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:04:11.297493   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:04:11.297665   30321 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:04:11.297677   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /etc/ssl/certs/248852.pem
	I0223 17:04:11.297860   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:04:11.305048   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:04:11.322835   30321 start.go:303] post-start completed in 182.282994ms
	I0223 17:04:11.323356   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000
	I0223 17:04:11.382358   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:11.382804   30321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:04:11.382865   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.440328   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.532728   30321 command_runner.go:130] > 5%!
	(MISSING)I0223 17:04:11.532809   30321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:04:11.537424   30321 command_runner.go:130] > 93G
	I0223 17:04:11.537436   30321 start.go:128] duration metric: createHost completed in 10.968179606s
	I0223 17:04:11.537447   30321 start.go:83] releasing machines lock for "multinode-384000", held for 10.968298967s
	I0223 17:04:11.537525   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000
	I0223 17:04:11.594419   30321 ssh_runner.go:195] Run: cat /version.json
	I0223 17:04:11.594443   30321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 17:04:11.594501   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.594523   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.657132   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.657162   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.750436   30321 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 17:04:11.803827   30321 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 17:04:11.805802   30321 ssh_runner.go:195] Run: systemctl --version
	I0223 17:04:11.810782   30321 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 17:04:11.810804   30321 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 17:04:11.810894   30321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:04:11.815548   30321 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 17:04:11.815559   30321 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 17:04:11.815571   30321 command_runner.go:130] > Device: a6h/166d	Inode: 2885211     Links: 1
	I0223 17:04:11.815577   30321 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:04:11.815585   30321 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:04:11.815594   30321 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:04:11.815600   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.964225417 +0000
	I0223 17:04:11.815607   30321 command_runner.go:130] >  Birth: -
	I0223 17:04:11.815859   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:04:11.836126   30321 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:04:11.836196   30321 ssh_runner.go:195] Run: which cri-dockerd
	I0223 17:04:11.839967   30321 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 17:04:11.840207   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 17:04:11.847579   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 17:04:11.860245   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 17:04:11.874957   30321 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 17:04:11.874986   30321 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 17:04:11.874997   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:04:11.875008   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:04:11.875085   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:04:11.887495   30321 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:04:11.887507   30321 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:04:11.888331   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 17:04:11.896733   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:04:11.905073   30321 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:04:11.905130   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:04:11.913621   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:04:11.922108   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:04:11.930637   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:04:11.938936   30321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:04:11.946966   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:04:11.955500   30321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:04:11.962229   30321 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 17:04:11.962816   30321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:04:11.970192   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:04:12.035519   30321 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:04:12.107992   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:04:12.108010   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:04:12.108082   30321 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:04:12.118212   30321 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 17:04:12.118339   30321 command_runner.go:130] > [Unit]
	I0223 17:04:12.118351   30321 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 17:04:12.118361   30321 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 17:04:12.118373   30321 command_runner.go:130] > BindsTo=containerd.service
	I0223 17:04:12.118381   30321 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 17:04:12.118386   30321 command_runner.go:130] > Wants=network-online.target
	I0223 17:04:12.118391   30321 command_runner.go:130] > Requires=docker.socket
	I0223 17:04:12.118395   30321 command_runner.go:130] > StartLimitBurst=3
	I0223 17:04:12.118399   30321 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 17:04:12.118403   30321 command_runner.go:130] > [Service]
	I0223 17:04:12.118406   30321 command_runner.go:130] > Type=notify
	I0223 17:04:12.118410   30321 command_runner.go:130] > Restart=on-failure
	I0223 17:04:12.118417   30321 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 17:04:12.118430   30321 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 17:04:12.118438   30321 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 17:04:12.118445   30321 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 17:04:12.118456   30321 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 17:04:12.118465   30321 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 17:04:12.118474   30321 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 17:04:12.118488   30321 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 17:04:12.118497   30321 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 17:04:12.118503   30321 command_runner.go:130] > ExecStart=
	I0223 17:04:12.118522   30321 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 17:04:12.118533   30321 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 17:04:12.118542   30321 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 17:04:12.118548   30321 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 17:04:12.118553   30321 command_runner.go:130] > LimitNOFILE=infinity
	I0223 17:04:12.118558   30321 command_runner.go:130] > LimitNPROC=infinity
	I0223 17:04:12.118563   30321 command_runner.go:130] > LimitCORE=infinity
	I0223 17:04:12.118570   30321 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 17:04:12.118577   30321 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 17:04:12.118583   30321 command_runner.go:130] > TasksMax=infinity
	I0223 17:04:12.118588   30321 command_runner.go:130] > TimeoutStartSec=0
	I0223 17:04:12.118597   30321 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 17:04:12.118608   30321 command_runner.go:130] > Delegate=yes
	I0223 17:04:12.118616   30321 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 17:04:12.118629   30321 command_runner.go:130] > KillMode=process
	I0223 17:04:12.118642   30321 command_runner.go:130] > [Install]
	I0223 17:04:12.118648   30321 command_runner.go:130] > WantedBy=multi-user.target
	I0223 17:04:12.119210   30321 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:04:12.119274   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:04:12.131181   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:04:12.144963   30321 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:04:12.144977   30321 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:04:12.145841   30321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:04:12.253094   30321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:04:12.320407   30321 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:04:12.320430   30321 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:04:12.361491   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:04:12.439781   30321 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:04:12.673580   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:04:12.743275   30321 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 17:04:12.743401   30321 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 17:04:12.823552   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:04:12.893863   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:04:12.961594   30321 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 17:04:12.981225   30321 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 17:04:12.981312   30321 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 17:04:12.985616   30321 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 17:04:12.985626   30321 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 17:04:12.985631   30321 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0223 17:04:12.985640   30321 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 17:04:12.985647   30321 command_runner.go:130] > Access: 2023-02-24 01:04:12.969300763 +0000
	I0223 17:04:12.985652   30321 command_runner.go:130] > Modify: 2023-02-24 01:04:12.969300763 +0000
	I0223 17:04:12.985656   30321 command_runner.go:130] > Change: 2023-02-24 01:04:12.978300762 +0000
	I0223 17:04:12.985660   30321 command_runner.go:130] >  Birth: -
	I0223 17:04:12.985674   30321 start.go:553] Will wait 60s for crictl version
	I0223 17:04:12.985719   30321 ssh_runner.go:195] Run: which crictl
	I0223 17:04:12.989681   30321 command_runner.go:130] > /usr/bin/crictl
	I0223 17:04:12.989738   30321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 17:04:13.088268   30321 command_runner.go:130] > Version:  0.1.0
	I0223 17:04:13.088281   30321 command_runner.go:130] > RuntimeName:  docker
	I0223 17:04:13.088285   30321 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 17:04:13.088289   30321 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 17:04:13.090557   30321 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 17:04:13.090640   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:04:13.114450   30321 command_runner.go:130] > 23.0.1
	I0223 17:04:13.116236   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:04:13.139645   30321 command_runner.go:130] > 23.0.1
	I0223 17:04:13.185785   30321 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 17:04:13.185936   30321 cli_runner.go:164] Run: docker exec -t multinode-384000 dig +short host.docker.internal
	I0223 17:04:13.298095   30321 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:04:13.298210   30321 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:04:13.302960   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
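The shell one-liner above rewrites /etc/hosts so the host.minikube.internal entry is updated idempotently (drop any stale line, append the current mapping). A hedged Go equivalent of that upsert, simplified to skip root handling and atomic temp-file replacement (hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "<tab>host" and appends "ip<tab>host".
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}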
	I0223 17:04:13.313024   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:13.370101   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:13.370177   30321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:04:13.388852   30321 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 17:04:13.388872   30321 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 17:04:13.388876   30321 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 17:04:13.388882   30321 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 17:04:13.388887   30321 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 17:04:13.388892   30321 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 17:04:13.388897   30321 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 17:04:13.388904   30321 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 17:04:13.390338   30321 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 17:04:13.390351   30321 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:04:13.390440   30321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:04:13.408340   30321 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 17:04:13.408353   30321 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 17:04:13.408357   30321 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 17:04:13.408366   30321 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 17:04:13.408373   30321 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 17:04:13.408385   30321 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 17:04:13.408403   30321 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 17:04:13.408411   30321 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 17:04:13.409739   30321 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 17:04:13.409753   30321 cache_images.go:84] Images are preloaded, skipping loading
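The two `docker images --format {{.Repository}}:{{.Tag}}` listings above are how the preload check decides that image extraction can be skipped. A rough Go sketch of that comparison; the required-image list here is abbreviated from the log, and the check is an assumption about the general pattern rather than minikube's exact code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadedImages returns the set of repo:tag strings reported by docker.
func preloadedImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	return have, nil
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.26.1",
		"registry.k8s.io/etcd:3.5.6-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
	}
	have, err := preloadedImages()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img)
		}
	}
}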
	I0223 17:04:13.409855   30321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:04:13.434482   30321 command_runner.go:130] > cgroupfs
	I0223 17:04:13.436200   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:04:13.436212   30321 cni.go:136] 1 nodes found, recommending kindnet
	I0223 17:04:13.436228   30321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:04:13.436245   30321 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-384000 NodeName:multinode-384000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:04:13.436376   30321 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-384000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 17:04:13.436452   30321 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-384000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
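The kubeadm config and kubelet flags above are rendered from templates parameterized by the values in the config struct (advertise address, CRI socket, node name, and so on). A much-simplified, hypothetical text/template sketch of that rendering; the real template covers all three kubeadm documents shown earlier:

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type kubeadmParams struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	p := kubeadmParams{
		AdvertiseAddress: "192.168.58.2",
		APIServerPort:    8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
		NodeName:         "multinode-384000",
		NodeIP:           "192.168.58.2",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}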
	I0223 17:04:13.436528   30321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 17:04:13.443959   30321 command_runner.go:130] > kubeadm
	I0223 17:04:13.443967   30321 command_runner.go:130] > kubectl
	I0223 17:04:13.443971   30321 command_runner.go:130] > kubelet
	I0223 17:04:13.444627   30321 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:04:13.444682   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:04:13.452079   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 17:04:13.464837   30321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:04:13.478352   30321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 17:04:13.493288   30321 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:04:13.497118   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:04:13.506890   30321 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000 for IP: 192.168.58.2
	I0223 17:04:13.506907   30321 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.507084   30321 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:04:13.507153   30321 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:04:13.507202   30321 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key
	I0223 17:04:13.507218   30321 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt with IP's: []
	I0223 17:04:13.627945   30321 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt ...
	I0223 17:04:13.627961   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt: {Name:mkd359862379ab1055d74401ef8de9196a9ae6b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.628235   30321 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key ...
	I0223 17:04:13.628243   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key: {Name:mkf7637f021e05129181fedc91db0006be87932e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.628430   30321 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041
	I0223 17:04:13.628445   30321 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 17:04:13.859191   30321 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041 ...
	I0223 17:04:13.859204   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041: {Name:mkc78a39ff6b63467a6908b8cbc3acb08372be96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.859458   30321 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041 ...
	I0223 17:04:13.859467   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041: {Name:mka676fb69891e26111a30d7dfc27b7bc2bb5bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.859653   30321 certs.go:333] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt
	I0223 17:04:13.859802   30321 certs.go:337] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key
	I0223 17:04:13.860385   30321 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key
	I0223 17:04:13.860470   30321 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt with IP's: []
	I0223 17:04:13.917792   30321 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt ...
	I0223 17:04:13.917805   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt: {Name:mk4981f40b576090a5abf96b77f791333731295e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.918048   30321 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key ...
	I0223 17:04:13.918055   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key: {Name:mkcac27c753957cd07ba28de35fc56a0e42e26b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
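The crypto.go lines above generate the client, apiserver, and proxy-client certificates, with the apiserver cert signed for the node, service, and loopback IPs. A self-contained crypto/x509 sketch of issuing a certificate with those IP SANs; this self-signs for brevity, whereas minikube signs with its CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * 365 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment | x509.KeyUsageCertSign,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the log: node IP, service VIP, loopback, default VIP.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}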
	I0223 17:04:13.918221   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 17:04:13.918251   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 17:04:13.918271   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 17:04:13.918337   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 17:04:13.918376   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 17:04:13.918410   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 17:04:13.918427   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 17:04:13.918444   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 17:04:13.918535   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:04:13.918582   30321 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:04:13.918593   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:04:13.918628   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:04:13.918662   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:04:13.918692   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:04:13.918756   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:04:13.918786   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:13.918806   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem -> /usr/share/ca-certificates/24885.pem
	I0223 17:04:13.918844   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /usr/share/ca-certificates/248852.pem
	I0223 17:04:13.919356   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:04:13.938023   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 17:04:13.955250   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:04:13.972692   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 17:04:13.990044   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:04:14.007252   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:04:14.024619   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:04:14.041921   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:04:14.059218   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:04:14.076771   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:04:14.094193   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:04:14.111961   30321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:04:14.126469   30321 ssh_runner.go:195] Run: openssl version
	I0223 17:04:14.131792   30321 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 17:04:14.132238   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:04:14.140450   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.144477   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.144572   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.144618   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.149818   30321 command_runner.go:130] > b5213941
	I0223 17:04:14.150253   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:04:14.158960   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:04:14.167444   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.172027   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.172071   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.172113   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.177443   30321 command_runner.go:130] > 51391683
	I0223 17:04:14.177808   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:04:14.186090   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:04:14.194695   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.198627   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.198651   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.198694   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.203810   30321 command_runner.go:130] > 3ec20f2e
	I0223 17:04:14.204262   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
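The openssl/ln sequence above installs each extra CA cert under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-style trust lookups can find it. A small Go sketch of the same step, shelling out to openssl for the hash; the paths are taken from the log, and the helper itself is an assumption, not minikube code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert links certPath into certsDir under its OpenSSL subject hash.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}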
	I0223 17:04:14.212871   30321 kubeadm.go:401] StartCluster: {Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:04:14.212999   30321 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:04:14.232790   30321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:04:14.240734   30321 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 17:04:14.240745   30321 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 17:04:14.240750   30321 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 17:04:14.240811   30321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:04:14.248331   30321 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:04:14.248387   30321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:04:14.256197   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 17:04:14.256208   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 17:04:14.256213   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 17:04:14.256223   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:04:14.256247   30321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:04:14.256266   30321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:04:14.305883   30321 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 17:04:14.305888   30321 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 17:04:14.305924   30321 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:04:14.305933   30321 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 17:04:14.413213   30321 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:04:14.413227   30321 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:04:14.413305   30321 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:04:14.413307   30321 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:04:14.413423   30321 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:04:14.413435   30321 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:04:14.547463   30321 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:04:14.547474   30321 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:04:14.589806   30321 out.go:204]   - Generating certificates and keys ...
	I0223 17:04:14.589916   30321 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 17:04:14.589925   30321 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:04:14.590004   30321 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:04:14.590015   30321 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 17:04:14.794940   30321 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 17:04:14.794953   30321 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 17:04:14.971748   30321 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 17:04:14.971750   30321 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 17:04:15.161013   30321 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 17:04:15.161018   30321 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 17:04:15.479871   30321 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 17:04:15.479887   30321 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 17:04:15.741064   30321 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 17:04:15.741079   30321 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 17:04:15.741217   30321 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.741226   30321 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.854118   30321 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 17:04:15.854131   30321 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 17:04:15.854257   30321 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.854266   30321 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.997908   30321 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 17:04:15.997925   30321 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 17:04:16.260935   30321 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 17:04:16.260952   30321 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 17:04:16.444983   30321 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 17:04:16.444998   30321 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 17:04:16.445075   30321 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:04:16.445112   30321 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:04:16.550868   30321 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:04:16.550878   30321 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:04:16.859517   30321 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:04:16.859526   30321 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:04:16.897545   30321 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:04:16.897557   30321 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:04:17.001029   30321 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:04:17.001040   30321 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:04:17.011314   30321 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:04:17.011327   30321 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:04:17.012017   30321 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:04:17.012034   30321 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:04:17.012095   30321 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 17:04:17.012108   30321 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 17:04:17.089542   30321 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:04:17.089579   30321 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:04:17.111045   30321 out.go:204]   - Booting up control plane ...
	I0223 17:04:17.111119   30321 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:04:17.111126   30321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:04:17.111212   30321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:04:17.111224   30321 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:04:17.111291   30321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:04:17.111301   30321 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:04:17.111392   30321 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:04:17.111396   30321 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:04:17.111556   30321 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:04:17.111560   30321 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:04:25.097582   30321 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002824 seconds
	I0223 17:04:25.097607   30321 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002824 seconds
	I0223 17:04:25.097826   30321 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 17:04:25.097829   30321 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 17:04:25.105533   30321 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 17:04:25.105554   30321 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 17:04:25.621818   30321 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 17:04:25.621824   30321 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 17:04:25.621972   30321 kubeadm.go:322] [mark-control-plane] Marking the node multinode-384000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 17:04:25.621978   30321 command_runner.go:130] > [mark-control-plane] Marking the node multinode-384000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 17:04:26.129954   30321 kubeadm.go:322] [bootstrap-token] Using token: cx2c6w.bbyzuhv5cn3ewcwq
	I0223 17:04:26.129957   30321 command_runner.go:130] > [bootstrap-token] Using token: cx2c6w.bbyzuhv5cn3ewcwq
	I0223 17:04:26.169884   30321 out.go:204]   - Configuring RBAC rules ...
	I0223 17:04:26.169990   30321 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 17:04:26.170008   30321 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 17:04:26.171961   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 17:04:26.171976   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 17:04:26.212050   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 17:04:26.212052   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 17:04:26.215446   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 17:04:26.215458   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 17:04:26.218497   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 17:04:26.218502   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 17:04:26.220593   30321 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 17:04:26.220604   30321 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 17:04:26.228549   30321 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 17:04:26.228566   30321 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 17:04:26.374035   30321 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 17:04:26.374049   30321 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 17:04:26.575276   30321 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 17:04:26.575307   30321 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 17:04:26.575752   30321 kubeadm.go:322] 
	I0223 17:04:26.575839   30321 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 17:04:26.575849   30321 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 17:04:26.575856   30321 kubeadm.go:322] 
	I0223 17:04:26.575921   30321 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 17:04:26.575927   30321 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 17:04:26.575934   30321 kubeadm.go:322] 
	I0223 17:04:26.575962   30321 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 17:04:26.575978   30321 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 17:04:26.576039   30321 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 17:04:26.576046   30321 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 17:04:26.576106   30321 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 17:04:26.576112   30321 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 17:04:26.576135   30321 kubeadm.go:322] 
	I0223 17:04:26.576195   30321 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 17:04:26.576202   30321 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 17:04:26.576212   30321 kubeadm.go:322] 
	I0223 17:04:26.576255   30321 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 17:04:26.576261   30321 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 17:04:26.576267   30321 kubeadm.go:322] 
	I0223 17:04:26.576329   30321 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 17:04:26.576338   30321 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 17:04:26.576415   30321 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 17:04:26.576424   30321 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 17:04:26.576476   30321 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 17:04:26.576482   30321 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 17:04:26.576487   30321 kubeadm.go:322] 
	I0223 17:04:26.576573   30321 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 17:04:26.576581   30321 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 17:04:26.576642   30321 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 17:04:26.576648   30321 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 17:04:26.576651   30321 kubeadm.go:322] 
	I0223 17:04:26.576711   30321 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.576716   30321 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.576804   30321 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 \
	I0223 17:04:26.576810   30321 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 \
	I0223 17:04:26.576824   30321 kubeadm.go:322] 	--control-plane 
	I0223 17:04:26.576828   30321 command_runner.go:130] > 	--control-plane 
	I0223 17:04:26.576830   30321 kubeadm.go:322] 
	I0223 17:04:26.576928   30321 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 17:04:26.576935   30321 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 17:04:26.576938   30321 kubeadm.go:322] 
	I0223 17:04:26.577007   30321 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.577014   30321 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.577099   30321 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 
	I0223 17:04:26.577110   30321 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 
	I0223 17:04:26.579881   30321 kubeadm.go:322] W0224 01:04:14.298932    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 17:04:26.579888   30321 command_runner.go:130] ! W0224 01:04:14.298932    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 17:04:26.580034   30321 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 17:04:26.580037   30321 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 17:04:26.580140   30321 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:04:26.580147   30321 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:04:26.580159   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:04:26.580168   30321 cni.go:136] 1 nodes found, recommending kindnet
	I0223 17:04:26.620014   30321 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 17:04:26.656873   30321 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 17:04:26.663017   30321 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 17:04:26.663039   30321 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 17:04:26.663056   30321 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0223 17:04:26.663074   30321 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:04:26.663087   30321 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:04:26.663101   30321 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:04:26.663115   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.136225471 +0000
	I0223 17:04:26.663126   30321 command_runner.go:130] >  Birth: -
	I0223 17:04:26.663251   30321 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 17:04:26.663263   30321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 17:04:26.680707   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 17:04:27.208803   30321 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 17:04:27.212534   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 17:04:27.218738   30321 command_runner.go:130] > serviceaccount/kindnet created
	I0223 17:04:27.225995   30321 command_runner.go:130] > daemonset.apps/kindnet created
	I0223 17:04:27.231986   30321 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 17:04:27.232069   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.232070   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510 minikube.k8s.io/name=multinode-384000 minikube.k8s.io/updated_at=2023_02_23T17_04_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.240193   30321 command_runner.go:130] > -16
	I0223 17:04:27.240226   30321 ops.go:34] apiserver oom_adj: -16
	I0223 17:04:27.311814   30321 command_runner.go:130] > node/multinode-384000 labeled
	I0223 17:04:27.311869   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0223 17:04:27.311940   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.408073   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:27.908262   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.968202   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:28.408830   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:28.475246   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:28.908280   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:28.967612   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:29.408852   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:29.472978   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:29.908354   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:29.971761   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:30.409067   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:30.474873   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:30.908464   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:30.972109   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:31.408178   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:31.472779   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:31.908615   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:31.972454   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:32.408376   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:32.468254   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:32.908353   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:32.969151   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:33.408382   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:33.470935   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:33.908195   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:33.971920   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:34.408480   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:34.473271   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:34.908383   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:34.973062   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:35.408263   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:35.476833   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:35.908405   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:35.972479   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:36.408363   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:36.472255   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:36.908284   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:36.972764   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:37.408510   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:37.480102   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:37.908328   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:37.969697   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:38.408314   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:38.475073   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:38.908389   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:38.971204   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:39.408365   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:39.472020   30321 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 17:04:39.472031   30321 command_runner.go:130] > default   0         0s
	I0223 17:04:39.475148   30321 kubeadm.go:1073] duration metric: took 12.243277339s to wait for elevateKubeSystemPrivileges.
	I0223 17:04:39.475171   30321 kubeadm.go:403] StartCluster complete in 25.262585357s
	I0223 17:04:39.475192   30321 settings.go:142] acquiring lock: {Name:mk850986f273a9d917e0b12c78b43b3396ccf03c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:39.475263   30321 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:39.475780   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/kubeconfig: {Name:mk7d15723b32e59bb8ea0777461e49fb0d77cb39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:39.504308   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 17:04:39.504345   30321 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 17:04:39.504407   30321 addons.go:65] Setting storage-provisioner=true in profile "multinode-384000"
	I0223 17:04:39.504425   30321 addons.go:227] Setting addon storage-provisioner=true in "multinode-384000"
	I0223 17:04:39.504424   30321 addons.go:65] Setting default-storageclass=true in profile "multinode-384000"
	I0223 17:04:39.504455   30321 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-384000"
	I0223 17:04:39.504475   30321 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:04:39.504487   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:04:39.504724   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:39.504817   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:39.507756   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:39.508023   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:04:39.508662   30321 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 17:04:39.508991   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:04:39.508999   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:39.509007   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:39.509013   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:39.543889   30321 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0223 17:04:39.543911   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:39.543921   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:39 GMT
	I0223 17:04:39.543928   30321 round_trippers.go:580]     Audit-Id: db55fe89-e714-4975-8cf3-8d02b7124d3f
	I0223 17:04:39.543937   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:39.543946   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:39.543955   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:39.543963   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:39.543974   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:04:39.544012   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"228","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:39.544427   30321 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"228","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:39.544469   30321 round_trippers.go:463] PUT https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:04:39.544475   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:39.544482   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:39.544489   30321 round_trippers.go:473]     Content-Type: application/json
	I0223 17:04:39.544494   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:39.550583   30321 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 17:04:39.550614   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:39.550626   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:04:39.550637   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:39 GMT
	I0223 17:04:39.550646   30321 round_trippers.go:580]     Audit-Id: 518b0559-3900-4a94-a869-e3e161f29070
	I0223 17:04:39.550657   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:39.550673   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:39.550701   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:39.550712   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:39.551287   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"316","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:39.577452   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:39.577716   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:04:39.599316   30321 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 17:04:39.599660   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/storage.k8s.io/v1/storageclasses
	I0223 17:04:39.636236   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:39.636254   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:39.636267   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:39.636282   30321 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 17:04:39.636302   30321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 17:04:39.636449   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:39.640311   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:39.640336   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:39.640345   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:39.640352   30321 round_trippers.go:580]     Content-Length: 109
	I0223 17:04:39.640358   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:39 GMT
	I0223 17:04:39.640363   30321 round_trippers.go:580]     Audit-Id: 072bf91e-cc84-40b5-9243-826b1de57f46
	I0223 17:04:39.640368   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:39.640372   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:39.640377   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:39.640410   30321 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"322"},"items":[]}
	I0223 17:04:39.640683   30321 addons.go:227] Setting addon default-storageclass=true in "multinode-384000"
	I0223 17:04:39.640705   30321 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:04:39.641150   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:39.645405   30321 command_runner.go:130] > apiVersion: v1
	I0223 17:04:39.645434   30321 command_runner.go:130] > data:
	I0223 17:04:39.645444   30321 command_runner.go:130] >   Corefile: |
	I0223 17:04:39.645456   30321 command_runner.go:130] >     .:53 {
	I0223 17:04:39.645467   30321 command_runner.go:130] >         errors
	I0223 17:04:39.645486   30321 command_runner.go:130] >         health {
	I0223 17:04:39.645505   30321 command_runner.go:130] >            lameduck 5s
	I0223 17:04:39.645513   30321 command_runner.go:130] >         }
	I0223 17:04:39.645525   30321 command_runner.go:130] >         ready
	I0223 17:04:39.645547   30321 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 17:04:39.645558   30321 command_runner.go:130] >            pods insecure
	I0223 17:04:39.645575   30321 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 17:04:39.645587   30321 command_runner.go:130] >            ttl 30
	I0223 17:04:39.645592   30321 command_runner.go:130] >         }
	I0223 17:04:39.645599   30321 command_runner.go:130] >         prometheus :9153
	I0223 17:04:39.645603   30321 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 17:04:39.645609   30321 command_runner.go:130] >            max_concurrent 1000
	I0223 17:04:39.645614   30321 command_runner.go:130] >         }
	I0223 17:04:39.645617   30321 command_runner.go:130] >         cache 30
	I0223 17:04:39.645621   30321 command_runner.go:130] >         loop
	I0223 17:04:39.645624   30321 command_runner.go:130] >         reload
	I0223 17:04:39.645637   30321 command_runner.go:130] >         loadbalance
	I0223 17:04:39.645642   30321 command_runner.go:130] >     }
	I0223 17:04:39.645648   30321 command_runner.go:130] > kind: ConfigMap
	I0223 17:04:39.645652   30321 command_runner.go:130] > metadata:
	I0223 17:04:39.645663   30321 command_runner.go:130] >   creationTimestamp: "2023-02-24T01:04:26Z"
	I0223 17:04:39.645669   30321 command_runner.go:130] >   name: coredns
	I0223 17:04:39.645673   30321 command_runner.go:130] >   namespace: kube-system
	I0223 17:04:39.645682   30321 command_runner.go:130] >   resourceVersion: "224"
	I0223 17:04:39.645688   30321 command_runner.go:130] >   uid: 8e4da503-6c9a-4528-9e22-a1db71461ae8
	I0223 17:04:39.645892   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0223 17:04:39.712402   30321 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 17:04:39.712416   30321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 17:04:39.712483   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:39.712586   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:39.776504   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:39.967284   30321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 17:04:39.968908   30321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 17:04:40.051562   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:04:40.051586   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.051638   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.051653   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.054834   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:40.054849   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.054856   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.054863   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.054869   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.054874   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.054880   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:04:40.054889   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.054898   30321 round_trippers.go:580]     Audit-Id: 9002b302-d768-4ecf-b06a-c93d260628cb
	I0223 17:04:40.054916   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"359","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:40.054982   30321 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-384000" context rescaled to 1 replicas
	I0223 17:04:40.055005   30321 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 17:04:40.055479   30321 command_runner.go:130] > configmap/coredns replaced
	I0223 17:04:40.078224   30321 out.go:177] * Verifying Kubernetes components...
	I0223 17:04:40.078280   30321 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0223 17:04:40.152322   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:04:40.284666   30321 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 17:04:40.314959   30321 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 17:04:40.319659   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 17:04:40.354965   30321 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 17:04:40.360768   30321 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 17:04:40.367125   30321 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 17:04:40.374883   30321 command_runner.go:130] > pod/storage-provisioner created
	I0223 17:04:40.381648   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:40.407711   30321 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0223 17:04:40.450427   30321 addons.go:492] enable addons completed in 946.056078ms: enabled=[default-storageclass storage-provisioner]
	I0223 17:04:40.469437   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:40.469635   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:04:40.469888   30321 node_ready.go:35] waiting up to 6m0s for node "multinode-384000" to be "Ready" ...
	I0223 17:04:40.469938   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:40.469943   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.469951   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.469957   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.472751   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.472774   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.472780   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.472785   30321 round_trippers.go:580]     Audit-Id: f2708e51-e93b-4c72-893b-657a733849f4
	I0223 17:04:40.472791   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.472797   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.472802   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.472812   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.472898   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:40.473336   30321 node_ready.go:49] node "multinode-384000" has status "Ready":"True"
	I0223 17:04:40.473345   30321 node_ready.go:38] duration metric: took 3.439793ms waiting for node "multinode-384000" to be "Ready" ...
	I0223 17:04:40.473353   30321 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:04:40.473402   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:40.473407   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.473413   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.473418   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.476930   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:40.476947   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.476956   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.476963   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.476972   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.476979   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.476986   30321 round_trippers.go:580]     Audit-Id: eaf7bf69-ef2b-4746-a20e-cca80ce1fa0e
	I0223 17:04:40.476995   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.478350   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"370"},"items":[{"metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60224 chars]
	I0223 17:04:40.480900   30321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-bvdps" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:40.480955   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:40.480962   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.480968   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.480974   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.483591   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.483604   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.483611   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.483619   30321 round_trippers.go:580]     Audit-Id: ad0a48df-19d1-4c81-909f-b1e0cfa60945
	I0223 17:04:40.483625   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.483631   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.483671   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.483682   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.483795   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 17:04:40.484066   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:40.484073   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.484079   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.484084   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.486541   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.486552   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.486558   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.486564   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.486568   30321 round_trippers.go:580]     Audit-Id: a77986f4-7773-4253-8f44-c975169bb0dd
	I0223 17:04:40.486573   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.486577   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.486582   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.486893   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:40.987396   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:40.987423   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.987431   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.987437   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.990149   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.990170   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.990183   30321 round_trippers.go:580]     Audit-Id: 1b950b03-4dac-4e47-bf6b-7a0b45057e81
	I0223 17:04:40.990196   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.990208   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.990215   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.990221   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.990226   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.990304   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 17:04:40.990589   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:40.990596   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.990604   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.990613   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.993057   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.993072   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.993080   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.993086   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.993092   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.993097   30321 round_trippers.go:580]     Audit-Id: ece30c7c-13a2-44a0-8628-2d40eba39982
	I0223 17:04:40.993102   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.993108   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.993164   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:41.487216   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:41.487235   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.487242   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.487247   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.489636   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.489652   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.489659   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.489665   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.489673   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.489679   30321 round_trippers.go:580]     Audit-Id: 6202d3fd-f784-4864-a52a-08ea30e2a125
	I0223 17:04:41.489688   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.489696   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.490115   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 17:04:41.490463   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:41.490471   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.490478   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.490483   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.492960   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.492971   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.492977   30321 round_trippers.go:580]     Audit-Id: d649c63a-5328-4d65-b9c9-b0516d5f6975
	I0223 17:04:41.492982   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.492987   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.492991   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.492996   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.493001   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.493072   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:41.987503   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:41.987519   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.987528   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.987535   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.990443   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.990456   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.990464   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.990471   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.990478   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.990485   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.990490   30321 round_trippers.go:580]     Audit-Id: 4f05cb4f-158c-4231-9712-f5cb71c8dbd7
	I0223 17:04:41.990496   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.990574   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:41.990844   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:41.990851   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.990856   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.990865   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.993113   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.993128   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.993138   30321 round_trippers.go:580]     Audit-Id: ee936449-e5e5-4357-9b80-6e42dfcffdd7
	I0223 17:04:41.993145   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.993151   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.993156   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.993161   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.993166   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.993289   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:42.487870   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:42.487892   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.487905   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.487916   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.491978   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:42.491998   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.492006   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.492011   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.492016   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.492020   30321 round_trippers.go:580]     Audit-Id: ee8e1df6-8b00-4abe-ae71-69cd18b3e571
	I0223 17:04:42.492025   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.492030   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.492104   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:42.492483   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:42.492491   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.492497   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.492503   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.494690   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:42.494701   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.494707   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.494712   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.494720   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.494725   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.494730   30321 round_trippers.go:580]     Audit-Id: 49af8822-ddb0-43e2-a660-91492a65acfd
	I0223 17:04:42.494736   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.494804   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:42.494995   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:42.987325   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:42.987341   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.987355   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.987369   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.990503   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:42.990521   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.990528   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.990533   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.990542   30321 round_trippers.go:580]     Audit-Id: 4da14174-338d-4a5b-89bd-1bc55e0f006c
	I0223 17:04:42.990564   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.990593   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.990601   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.990686   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:42.991043   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:42.991055   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.991066   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.991078   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.997444   30321 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 17:04:42.997465   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.997476   30321 round_trippers.go:580]     Audit-Id: f9e2c73f-393d-4ca6-ab25-9bf1ace4b421
	I0223 17:04:42.997484   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.997493   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.997501   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.997509   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.997522   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.997615   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:43.489176   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:43.489190   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.489197   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.489202   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.492545   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:43.492558   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.492563   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.492568   30321 round_trippers.go:580]     Audit-Id: 6a4753fc-bcef-4349-833f-19c985476afd
	I0223 17:04:43.492573   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.492577   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.492582   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.492601   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.492811   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:43.493104   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:43.493112   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.493118   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.493123   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.495465   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:43.495478   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.495490   30321 round_trippers.go:580]     Audit-Id: ee99fc4a-b92f-4686-9f3f-f19252ce8f5b
	I0223 17:04:43.495499   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.495510   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.495516   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.495522   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.495526   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.495899   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:43.987168   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:43.987185   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.987192   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.987197   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.989970   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:43.989982   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.989989   30321 round_trippers.go:580]     Audit-Id: 1eebd99f-e8bd-4bb8-8f00-aaec045b11bd
	I0223 17:04:43.989994   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.989999   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.990004   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.990009   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.990014   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.990083   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:43.990361   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:43.990370   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.990384   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.990398   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.992599   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:43.992617   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.992631   30321 round_trippers.go:580]     Audit-Id: 486d8a13-0bb0-484d-b49b-452ce096560e
	I0223 17:04:43.992641   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.992647   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.992651   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.992656   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.992662   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.992876   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:44.487443   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:44.487487   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.487592   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.487605   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.492298   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:44.492312   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.492318   30321 round_trippers.go:580]     Audit-Id: 8acf453d-b615-4bf7-8074-e74f3a1dc912
	I0223 17:04:44.492325   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.492334   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.492341   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.492348   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.492354   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.492432   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:44.492724   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:44.492730   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.492737   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.492742   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.494990   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:44.494999   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.495005   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.495010   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.495015   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.495020   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.495027   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.495033   30321 round_trippers.go:580]     Audit-Id: bfa16d57-e46d-43b8-8889-2a2cb311b524
	I0223 17:04:44.495114   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:44.495287   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:44.987618   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:44.987643   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.987655   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.987665   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.991503   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:44.991516   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.991522   30321 round_trippers.go:580]     Audit-Id: fa242d57-d807-4a9b-9b12-a2f5afd3fcda
	I0223 17:04:44.991527   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.991536   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.991543   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.991559   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.991567   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.991634   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:44.991915   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:44.991922   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.991928   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.991934   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.993953   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:44.993964   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.993969   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.993975   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.993979   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.993985   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.993989   30321 round_trippers.go:580]     Audit-Id: b82d2d1d-2e74-4577-a46b-816a35a0923e
	I0223 17:04:44.993995   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.994052   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:45.487349   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:45.487375   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.487388   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.487397   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.491000   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:45.491013   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.491019   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.491024   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.491028   30321 round_trippers.go:580]     Audit-Id: b288164a-ab04-46e5-812a-1d891d4e3d41
	I0223 17:04:45.491033   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.491041   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.491046   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.491353   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:45.491634   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:45.491640   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.491646   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.491652   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.493792   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:45.493803   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.493808   30321 round_trippers.go:580]     Audit-Id: 0425ee23-0d59-4b94-acbf-e07e5bf849ac
	I0223 17:04:45.493813   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.493818   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.493823   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.493829   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.493833   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.493932   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:45.988698   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:45.988713   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.988767   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.988773   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.992827   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:45.992838   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.992849   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.992855   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.992860   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.992865   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.992870   30321 round_trippers.go:580]     Audit-Id: 1ae792ed-fb0e-4756-beb1-3884d8aacb52
	I0223 17:04:45.992875   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.992940   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:45.993213   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:45.993219   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.993225   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.993230   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.995311   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:45.995321   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.995326   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.995332   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.995337   30321 round_trippers.go:580]     Audit-Id: b6dfe7bc-63a9-4655-a67e-987d92c5f38d
	I0223 17:04:45.995343   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.995349   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.995354   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.995402   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:46.488689   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:46.488705   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.488712   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.488717   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.491839   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:46.491850   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.491856   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.491863   30321 round_trippers.go:580]     Audit-Id: d726c5bf-f539-4592-b838-8e63a45bf193
	I0223 17:04:46.491868   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.491872   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.491877   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.491882   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.493129   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:46.493829   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:46.493837   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.493843   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.493849   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.496229   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:46.496240   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.496246   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.496251   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.496256   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.496261   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.496266   30321 round_trippers.go:580]     Audit-Id: 3ad5e27a-e070-49b7-b987-724afe494bff
	I0223 17:04:46.496271   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.496527   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:46.496720   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:46.987737   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:46.987756   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.987763   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.987768   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.990502   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:46.990521   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.990527   30321 round_trippers.go:580]     Audit-Id: 36401443-402b-440c-a705-f247b14be0d0
	I0223 17:04:46.990559   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.990566   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.990571   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.990578   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.990583   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.990661   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:46.990969   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:46.990976   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.990982   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.990987   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.992973   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:04:46.992983   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.992989   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.992994   30321 round_trippers.go:580]     Audit-Id: c0223492-547c-4c0f-9820-f14bad7e2250
	I0223 17:04:46.992999   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.993004   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.993010   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.993014   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.993320   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:47.487718   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:47.487736   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.487784   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.487790   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.490538   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:47.490554   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.490560   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.490565   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.490570   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.490575   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.490583   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.490590   30321 round_trippers.go:580]     Audit-Id: 3f574b37-3fe3-4104-9d6c-5e499a66c939
	I0223 17:04:47.490713   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:47.491061   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:47.491069   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.491075   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.491081   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.493416   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:47.493428   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.493433   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.493441   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.493446   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.493451   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.493456   30321 round_trippers.go:580]     Audit-Id: 2a5014cd-6cea-4298-b7db-96ffd5fedfcd
	I0223 17:04:47.493461   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.493518   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:47.987154   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:47.987173   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.987180   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.987185   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.990169   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:47.990181   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.990187   30321 round_trippers.go:580]     Audit-Id: 8ecb152a-1eba-4d6f-9897-4d97af58b987
	I0223 17:04:47.990192   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.990197   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.990205   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.990210   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.990215   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.990282   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:47.990558   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:47.990564   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.990570   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.990575   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.992547   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:04:47.992556   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.992561   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.992567   30321 round_trippers.go:580]     Audit-Id: 27d8c3de-f5a1-425c-8c31-3865396da818
	I0223 17:04:47.992573   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.992593   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.992599   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.992604   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.992665   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:48.487227   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:48.487243   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.487249   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.487254   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.490078   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.490094   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.490104   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.490115   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.490122   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.490127   30321 round_trippers.go:580]     Audit-Id: 7327d450-86d2-4148-a09e-c0553854b072
	I0223 17:04:48.490132   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.490138   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.490340   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:48.490639   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:48.490646   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.490652   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.490657   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.493002   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.493014   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.493020   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.493025   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.493030   30321 round_trippers.go:580]     Audit-Id: ac391ddc-5031-4569-bbc0-475e5b876f9c
	I0223 17:04:48.493036   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.493040   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.493046   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.493120   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:48.987155   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:48.987174   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.987181   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.987186   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.990178   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.990196   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.990205   30321 round_trippers.go:580]     Audit-Id: 1147a35c-397a-4f83-beed-9d93c3e7cf40
	I0223 17:04:48.990227   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.990239   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.990250   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.990259   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.990267   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.990350   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:48.990674   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:48.990682   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.990688   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.990696   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.992776   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.992791   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.992798   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.992805   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.992812   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.992820   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.992826   30321 round_trippers.go:580]     Audit-Id: 1c4177d6-a7f0-4188-a26f-f20ac7e0a950
	I0223 17:04:48.992831   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.992918   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:48.993206   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:49.487133   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:49.487149   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.487155   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.487161   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.489850   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:49.489867   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.489876   30321 round_trippers.go:580]     Audit-Id: 2fb5081e-7e45-4bc1-ab45-101006a429fa
	I0223 17:04:49.489883   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.489889   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.489893   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.489899   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.489904   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.489979   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:49.490327   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:49.490335   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.490346   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.490359   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.492923   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:49.492935   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.492941   30321 round_trippers.go:580]     Audit-Id: b2d59498-97c2-478a-8fc9-1260bc886beb
	I0223 17:04:49.492945   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.492950   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.492955   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.492960   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.492964   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.493031   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:49.988410   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:49.988428   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.988434   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.988439   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.991686   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:49.991700   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.991706   30321 round_trippers.go:580]     Audit-Id: fe841beb-cf4e-4208-8222-7d33d5f0c270
	I0223 17:04:49.991711   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.991716   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.991721   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.991727   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.991735   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.991805   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:49.992102   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:49.992110   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.992117   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.992124   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.994516   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:49.994528   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.994536   30321 round_trippers.go:580]     Audit-Id: ccd5c503-91de-48d1-8725-2d831b1a728c
	I0223 17:04:49.994543   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.994554   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.994568   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.994576   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.994589   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.994821   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:50.488454   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:50.488469   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.488475   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.488480   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.491340   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:50.491353   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.491362   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.491369   30321 round_trippers.go:580]     Audit-Id: d406bae5-6001-4060-abf9-82fa403f71fd
	I0223 17:04:50.491376   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.491383   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.491390   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.491401   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.491533   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:50.491831   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:50.491838   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.491844   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.491849   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.494431   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:50.494448   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.494469   30321 round_trippers.go:580]     Audit-Id: 905f57e6-d5f4-4979-a0f7-7963b75e15fa
	I0223 17:04:50.494479   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.494486   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.494492   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.494496   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.494502   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.494655   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:50.987156   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:50.987176   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.987183   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.987188   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.990283   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:50.990297   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.990303   30321 round_trippers.go:580]     Audit-Id: 08fde636-e2a2-43f9-acc1-a0e82ff82505
	I0223 17:04:50.990308   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.990316   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.990322   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.990327   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.990331   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.990405   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:50.990694   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:50.990700   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.990707   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.990713   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.992898   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:50.992911   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.992917   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.992922   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.992926   30321 round_trippers.go:580]     Audit-Id: f88bc341-c3ea-4612-a233-c230aa898e32
	I0223 17:04:50.992932   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.992936   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.992945   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.993038   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:50.993239   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:51.487146   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:51.487205   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.487214   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.487223   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.489873   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.489888   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.489896   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.489903   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.489912   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.489919   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.489928   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.489935   30321 round_trippers.go:580]     Audit-Id: 4e324f53-bda1-4fa2-88b6-55677a1a5719
	I0223 17:04:51.490099   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:51.490384   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:51.490391   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.490397   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.490402   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.492662   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.492675   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.492682   30321 round_trippers.go:580]     Audit-Id: 590fc50a-d03d-4928-9ac0-8015b6e1aa4a
	I0223 17:04:51.492688   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.492696   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.492703   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.492711   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.492716   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.492803   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:51.988429   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:51.988442   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.988449   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.988454   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.991262   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.991280   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.991289   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.991302   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.991312   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.991319   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.991325   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.991333   30321 round_trippers.go:580]     Audit-Id: a36e6404-412c-47c7-873b-751f3184d5f3
	I0223 17:04:51.991421   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:51.991716   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:51.991723   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.991729   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.991734   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.994015   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.994025   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.994031   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.994036   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.994041   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.994046   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.994050   30321 round_trippers.go:580]     Audit-Id: 0816dcf1-d6ec-40f2-8373-be17980d696a
	I0223 17:04:51.994056   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.994111   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:52.488427   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:52.488443   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.488450   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.488455   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.491082   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:52.491100   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.491106   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.491111   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.491122   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.491129   30321 round_trippers.go:580]     Audit-Id: 1eb742b1-2554-4024-aaf7-6cd60d723824
	I0223 17:04:52.491134   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.491142   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.491213   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:52.491509   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:52.491515   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.491521   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.491527   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.493598   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:52.493610   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.493616   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.493621   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.493630   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.493637   30321 round_trippers.go:580]     Audit-Id: 17f68baa-da41-43f4-bfc2-67f2a3f7be73
	I0223 17:04:52.493642   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.493646   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.493846   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:52.987505   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:52.987519   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.987526   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.987532   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.990588   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:52.990600   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.990606   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.990612   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.990618   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.990625   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.990632   30321 round_trippers.go:580]     Audit-Id: 3521a5da-1e50-406c-a1ce-47bf5d07bdff
	I0223 17:04:52.990637   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.991129   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:52.991410   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:52.991417   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.991423   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.991428   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.994095   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:52.994111   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.994121   30321 round_trippers.go:580]     Audit-Id: aff1e28d-3ed1-4f54-bd73-34f59d41300c
	I0223 17:04:52.994131   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.994140   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.994166   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.994212   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.994236   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.994300   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:52.994500   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:53.487229   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:53.487243   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.487253   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.487261   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.490741   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:53.490753   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.490760   30321 round_trippers.go:580]     Audit-Id: 96d9e6ae-710b-4dba-b6ae-ca5b86c765f1
	I0223 17:04:53.490764   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.490769   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.490774   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.490780   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.490787   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.490880   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:53.491285   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:53.491293   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.491302   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.491311   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.494134   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:53.494201   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.494216   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.494238   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.494248   30321 round_trippers.go:580]     Audit-Id: fabcbfdb-c350-466d-b137-c03d10467209
	I0223 17:04:53.494254   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.494259   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.494264   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.494342   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:53.988176   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:53.988197   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.988204   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.988209   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.991495   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:53.991509   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.991515   30321 round_trippers.go:580]     Audit-Id: 90fe2a23-80e4-4eb2-abc9-11693888cad4
	I0223 17:04:53.991520   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.991526   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.991534   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.991540   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.991544   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.991707   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:53.991991   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:53.991997   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.992003   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.992008   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.994192   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:53.994207   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.994215   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.994230   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.994238   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.994261   30321 round_trippers.go:580]     Audit-Id: f8ea2450-cca7-438f-bb30-74ef92a97cba
	I0223 17:04:53.994285   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.994302   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.994381   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:54.489070   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:54.489087   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.489094   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.489099   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:54.491580   30321 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0223 17:04:54.491592   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:54.491597   30321 round_trippers.go:580]     Audit-Id: 7337d8c7-9cde-4157-9e8e-48d0b0c4a4ba
	I0223 17:04:54.491608   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:54.491614   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:54.491618   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:54.491623   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:54.491628   30321 round_trippers.go:580]     Content-Length: 216
	I0223 17:04:54.491637   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:54 GMT
	I0223 17:04:54.491652   30321 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-bvdps\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-bvdps","kind":"pods"},"code":404}
	I0223 17:04:54.491773   30321 pod_ready.go:97] error getting pod "coredns-787d4945fb-bvdps" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-bvdps" not found
	I0223 17:04:54.491784   30321 pod_ready.go:81] duration metric: took 14.011028799s waiting for pod "coredns-787d4945fb-bvdps" in "kube-system" namespace to be "Ready" ...
	E0223 17:04:54.491790   30321 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-bvdps" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-bvdps" not found
	I0223 17:04:54.491797   30321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:54.491833   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:04:54.491838   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.491844   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.491849   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:54.494103   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:54.494115   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:54.494120   30321 round_trippers.go:580]     Audit-Id: 46dade29-c38d-4abc-89c3-3443e8b7aa4c
	I0223 17:04:54.494125   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:54.494130   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:54.494135   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:54.494141   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:54.494148   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:54 GMT
	I0223 17:04:54.494274   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"397","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 17:04:54.494589   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:54.494596   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.494603   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.494611   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:54.497218   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:54.497235   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:54.497247   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:54.497253   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:54 GMT
	I0223 17:04:54.497258   30321 round_trippers.go:580]     Audit-Id: 017fcd45-c6f2-482e-8186-b7281d97e8ce
	I0223 17:04:54.497262   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:54.497268   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:54.497273   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:54.497336   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:54.998734   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:04:54.998747   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.998754   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.998759   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.001577   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.001592   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.001598   30321 round_trippers.go:580]     Audit-Id: 3b6d9905-4c85-4f1e-8ea5-890ac7ca9c42
	I0223 17:04:55.001603   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.001608   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.001613   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.001618   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.001623   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.001685   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"397","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 17:04:55.001959   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.001965   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.001971   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.001976   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.003993   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.004004   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.004011   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.004016   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.004023   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.004044   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.004053   30321 round_trippers.go:580]     Audit-Id: 429a9b92-9067-40b4-ac74-9fe4b9865514
	I0223 17:04:55.004059   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.004121   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.498767   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:04:55.498789   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.498802   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.498811   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.502788   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:55.502804   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.502813   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.502820   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.502827   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.502833   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.502840   30321 round_trippers.go:580]     Audit-Id: 50109eea-dace-4cf8-a972-5958051ef888
	I0223 17:04:55.502851   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.502933   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 17:04:55.503317   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.503323   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.503328   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.503334   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.505342   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.505352   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.505357   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.505362   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.505367   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.505372   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.505377   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.505383   30321 round_trippers.go:580]     Audit-Id: 5152107d-1158-4cc2-bc9f-57d4a53f66c7
	I0223 17:04:55.505434   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.505613   30321 pod_ready.go:92] pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.505625   30321 pod_ready.go:81] duration metric: took 1.013829993s waiting for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.505631   30321 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.505656   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/etcd-multinode-384000
	I0223 17:04:55.505661   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.505667   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.505673   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.507724   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.507734   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.507739   30321 round_trippers.go:580]     Audit-Id: a3dd6bc2-7146-44fc-892d-830c76e12cfc
	I0223 17:04:55.507744   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.507750   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.507755   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.507762   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.507768   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.507817   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-384000","namespace":"kube-system","uid":"c892d753-c892-4834-ba6f-34c4703cfa21","resourceVersion":"266","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.mirror":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.seen":"2023-02-24T01:04:26.472791839Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 17:04:55.508038   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.508044   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.508050   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.508057   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.510229   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.510240   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.510248   30321 round_trippers.go:580]     Audit-Id: 4bba5138-84c9-408e-a7c7-8bbf5defbfd4
	I0223 17:04:55.510268   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.510277   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.510286   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.510293   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.510298   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.510350   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.510535   30321 pod_ready.go:92] pod "etcd-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.510540   30321 pod_ready.go:81] duration metric: took 4.904985ms waiting for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.510548   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.510579   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-384000
	I0223 17:04:55.510583   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.510589   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.510595   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.512818   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.512829   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.512837   30321 round_trippers.go:580]     Audit-Id: 5e83b0dd-9d44-449a-9934-e776d61910a1
	I0223 17:04:55.512845   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.512850   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.512856   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.512863   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.512868   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.512941   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-384000","namespace":"kube-system","uid":"c42cb310-4d3e-44ed-aa9c-0f0bc12249d1","resourceVersion":"261","creationTimestamp":"2023-02-24T01:04:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.mirror":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.seen":"2023-02-24T01:04:17.403781278Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 17:04:55.513208   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.513214   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.513220   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.513225   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.515528   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.515537   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.515542   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.515548   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.515554   30321 round_trippers.go:580]     Audit-Id: 62a16788-8ae8-458c-94d6-da0d2fb90772
	I0223 17:04:55.515558   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.515564   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.515570   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.515702   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.515873   30321 pod_ready.go:92] pod "kube-apiserver-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.515878   30321 pod_ready.go:81] duration metric: took 5.324863ms waiting for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.515884   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.515920   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-384000
	I0223 17:04:55.515926   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.515934   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.515942   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.518008   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.518017   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.518022   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.518027   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.518032   30321 round_trippers.go:580]     Audit-Id: 535b01f2-bb83-4118-9c9a-6247b64d1224
	I0223 17:04:55.518037   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.518041   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.518047   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.518120   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-384000","namespace":"kube-system","uid":"ac83dab3-bb77-4542-9452-419c3f5087cb","resourceVersion":"264","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.mirror":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 17:04:55.518390   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.518396   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.518402   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.518407   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.520439   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.520449   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.520455   30321 round_trippers.go:580]     Audit-Id: 0b7b3abc-53ec-4198-8e4e-bab3fd1d4f9c
	I0223 17:04:55.520460   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.520465   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.520471   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.520475   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.520481   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.520526   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.520692   30321 pod_ready.go:92] pod "kube-controller-manager-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.520697   30321 pod_ready.go:81] duration metric: took 4.809125ms waiting for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.520705   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.520734   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-wmsxr
	I0223 17:04:55.520739   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.520746   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.520752   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.523043   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.523055   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.523064   30321 round_trippers.go:580]     Audit-Id: 9ba134ed-916c-43ef-810b-9cf02c133feb
	I0223 17:04:55.523072   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.523080   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.523087   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.523094   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.523101   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.523157   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wmsxr","generateName":"kube-proxy-","namespace":"kube-system","uid":"6d046618-e274-4a16-8846-14837962c18d","resourceVersion":"391","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 17:04:55.523393   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.523399   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.523405   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.523411   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.525430   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.525439   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.525447   30321 round_trippers.go:580]     Audit-Id: dfcaef41-2870-4718-af15-de2813bbd7eb
	I0223 17:04:55.525453   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.525458   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.525464   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.525468   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.525474   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.525525   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.525693   30321 pod_ready.go:92] pod "kube-proxy-wmsxr" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.525699   30321 pod_ready.go:81] duration metric: took 4.989131ms waiting for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.525704   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.698837   30321 request.go:622] Waited for 173.072526ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-384000
	I0223 17:04:55.698866   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-384000
	I0223 17:04:55.698872   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.698879   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.698884   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.701143   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.701154   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.701160   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.701165   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.701170   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.701175   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.701180   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.701185   30321 round_trippers.go:580]     Audit-Id: 46f23c4d-0ccf-4f7b-b582-0604eb932c30
	I0223 17:04:55.701243   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-384000","namespace":"kube-system","uid":"f914009d-3787-433d-8e3e-2f597d741c7e","resourceVersion":"279","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.mirror":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807884Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 17:04:55.899379   30321 request.go:622] Waited for 197.898951ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.899510   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.899526   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.899538   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.899547   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.903628   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:55.903642   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.903650   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.903657   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.903665   30321 round_trippers.go:580]     Audit-Id: 387c2cfb-ab5d-4889-b421-949d269d7e27
	I0223 17:04:55.903671   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.903677   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.903684   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.903756   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.903986   30321 pod_ready.go:92] pod "kube-scheduler-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.903992   30321 pod_ready.go:81] duration metric: took 378.288092ms waiting for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.903999   30321 pod_ready.go:38] duration metric: took 15.43081074s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
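The pod_ready.go lines above poll each control-plane pod and read its Ready condition until it reports True. A minimal client-go sketch of that check, for illustration only (not minikube's actual implementation; the kubeconfig path and the pod name are assumptions taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a standard kubeconfig; minikube builds its own client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above; any kube-system pod works the same way.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-multinode-384000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}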
	I0223 17:04:55.904013   30321 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:04:55.904070   30321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:04:55.913724   30321 command_runner.go:130] > 1883
	I0223 17:04:55.914492   30321 api_server.go:71] duration metric: took 15.859632508s to wait for apiserver process to appear ...
	I0223 17:04:55.914501   30321 api_server.go:87] waiting for apiserver healthz status ...
	I0223 17:04:55.914513   30321 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58131/healthz ...
	I0223 17:04:55.919164   30321 api_server.go:278] https://127.0.0.1:58131/healthz returned 200:
	ok
	I0223 17:04:55.919195   30321 round_trippers.go:463] GET https://127.0.0.1:58131/version
	I0223 17:04:55.919200   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.919206   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.919213   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.920562   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:04:55.920571   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.920577   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.920582   30321 round_trippers.go:580]     Content-Length: 263
	I0223 17:04:55.920587   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.920592   30321 round_trippers.go:580]     Audit-Id: b3ee4d8e-5bf5-4c55-9828-eb3d85629b10
	I0223 17:04:55.920599   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.920604   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.920609   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.920621   30321 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 17:04:55.920661   30321 api_server.go:140] control plane version: v1.26.1
	I0223 17:04:55.920667   30321 api_server.go:130] duration metric: took 6.162432ms to wait for apiserver health ...
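The healthz and /version probes logged above are plain HTTPS GETs against the forwarded apiserver port. A rough sketch of the same probes, assuming the local port 58131 from this run and skipping TLS verification because the test cluster presents a self-signed CA (a real client should trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test-only shortcut
	}}

	// Health endpoint: expect "200 ok", matching the api_server.go log line.
	resp, err := client.Get("https://127.0.0.1:58131/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// Version endpoint: the JSON body seen above carries gitVersion (v1.26.1 here).
	resp, err = client.Get("https://127.0.0.1:58131/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}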
	I0223 17:04:55.920671   30321 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 17:04:56.099525   30321 request.go:622] Waited for 178.804731ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.099558   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.099565   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.099572   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.099580   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.103049   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:56.103059   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.103064   30321 round_trippers.go:580]     Audit-Id: cbe26173-1ea5-401b-b0d0-b634efd79e7f
	I0223 17:04:56.103069   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.103074   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.103079   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.103087   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.103093   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.104354   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 17:04:56.105681   30321 system_pods.go:59] 8 kube-system pods found
	I0223 17:04:56.105696   30321 system_pods.go:61] "coredns-787d4945fb-nlz4z" [08aa5e04-355e-44b5-a80e-38f3491700e7] Running
	I0223 17:04:56.105701   30321 system_pods.go:61] "etcd-multinode-384000" [c892d753-c892-4834-ba6f-34c4703cfa21] Running
	I0223 17:04:56.105705   30321 system_pods.go:61] "kindnet-n4mpj" [6ef38cba-f7c8-4063-a588-dfd2146fd0a4] Running
	I0223 17:04:56.105708   30321 system_pods.go:61] "kube-apiserver-multinode-384000" [c42cb310-4d3e-44ed-aa9c-0f0bc12249d1] Running
	I0223 17:04:56.105712   30321 system_pods.go:61] "kube-controller-manager-multinode-384000" [ac83dab3-bb77-4542-9452-419c3f5087cb] Running
	I0223 17:04:56.105717   30321 system_pods.go:61] "kube-proxy-wmsxr" [6d046618-e274-4a16-8846-14837962c18d] Running
	I0223 17:04:56.105723   30321 system_pods.go:61] "kube-scheduler-multinode-384000" [f914009d-3787-433d-8e3e-2f597d741c7e] Running
	I0223 17:04:56.105727   30321 system_pods.go:61] "storage-provisioner" [babcd4ec-0d31-417d-a81b-137955e9c31e] Running
	I0223 17:04:56.105731   30321 system_pods.go:74] duration metric: took 185.057517ms to wait for pod list to return data ...
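The request.go "Waited for ... due to client-side throttling" entries come from client-go's client-side rate limiter rather than the server's priority-and-fairness layer, as the message itself notes. A small sketch of tuning that limiter on a rest.Config; the QPS/Burst values are illustrative, not what minikube configures:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are low (roughly QPS=5, Burst=10); raising them shortens the
	// client-side waits that request.go logs during bursts of GETs.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}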
	I0223 17:04:56.105740   30321 default_sa.go:34] waiting for default service account to be created ...
	I0223 17:04:56.300366   30321 request.go:622] Waited for 194.494463ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/default/serviceaccounts
	I0223 17:04:56.300421   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/default/serviceaccounts
	I0223 17:04:56.300429   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.300441   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.300451   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.304478   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:56.304495   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.304505   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.304521   30321 round_trippers.go:580]     Content-Length: 261
	I0223 17:04:56.304529   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.304537   30321 round_trippers.go:580]     Audit-Id: a6274a8c-ce66-435f-b002-b64602e18ead
	I0223 17:04:56.304543   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.304549   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.304559   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.304640   30321 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"02334fa8-6050-4807-b381-d1bf7926ee40","resourceVersion":"312","creationTimestamp":"2023-02-24T01:04:39Z"}}]}
	I0223 17:04:56.304782   30321 default_sa.go:45] found service account: "default"
	I0223 17:04:56.304791   30321 default_sa.go:55] duration metric: took 199.048533ms for default service account to be created ...
	I0223 17:04:56.304803   30321 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 17:04:56.500241   30321 request.go:622] Waited for 195.358089ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.500307   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.500386   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.500400   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.500447   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.505816   30321 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 17:04:56.505829   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.505835   30321 round_trippers.go:580]     Audit-Id: 1b56132a-de22-496e-86d1-a8b285f524ad
	I0223 17:04:56.505840   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.505846   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.505851   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.505859   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.505866   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.506201   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 17:04:56.507473   30321 system_pods.go:86] 8 kube-system pods found
	I0223 17:04:56.507482   30321 system_pods.go:89] "coredns-787d4945fb-nlz4z" [08aa5e04-355e-44b5-a80e-38f3491700e7] Running
	I0223 17:04:56.507486   30321 system_pods.go:89] "etcd-multinode-384000" [c892d753-c892-4834-ba6f-34c4703cfa21] Running
	I0223 17:04:56.507490   30321 system_pods.go:89] "kindnet-n4mpj" [6ef38cba-f7c8-4063-a588-dfd2146fd0a4] Running
	I0223 17:04:56.507493   30321 system_pods.go:89] "kube-apiserver-multinode-384000" [c42cb310-4d3e-44ed-aa9c-0f0bc12249d1] Running
	I0223 17:04:56.507497   30321 system_pods.go:89] "kube-controller-manager-multinode-384000" [ac83dab3-bb77-4542-9452-419c3f5087cb] Running
	I0223 17:04:56.507502   30321 system_pods.go:89] "kube-proxy-wmsxr" [6d046618-e274-4a16-8846-14837962c18d] Running
	I0223 17:04:56.507505   30321 system_pods.go:89] "kube-scheduler-multinode-384000" [f914009d-3787-433d-8e3e-2f597d741c7e] Running
	I0223 17:04:56.507509   30321 system_pods.go:89] "storage-provisioner" [babcd4ec-0d31-417d-a81b-137955e9c31e] Running
	I0223 17:04:56.507513   30321 system_pods.go:126] duration metric: took 202.708387ms to wait for k8s-apps to be running ...
	I0223 17:04:56.507519   30321 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 17:04:56.507576   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:04:56.517668   30321 system_svc.go:56] duration metric: took 10.144194ms WaitForService to wait for kubelet.
	I0223 17:04:56.517681   30321 kubeadm.go:578] duration metric: took 16.462829245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 17:04:56.517696   30321 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:04:56.699314   30321 request.go:622] Waited for 181.572423ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes
	I0223 17:04:56.699359   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes
	I0223 17:04:56.699368   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.699378   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.699427   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.702684   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:56.702697   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.702703   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.702708   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.702713   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.702718   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.702723   30321 round_trippers.go:580]     Audit-Id: b3bf67a2-ffb8-4c1d-8d1d-c0f04b9a7c1a
	I0223 17:04:56.702728   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.702788   30321 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5007 chars]
	I0223 17:04:56.703003   30321 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:04:56.703014   30321 node_conditions.go:123] node cpu capacity is 6
	I0223 17:04:56.703024   30321 node_conditions.go:105] duration metric: took 185.326028ms to run NodePressure ...
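The node_conditions check above reads the node's reported capacity (ephemeral storage and CPU count) from its status. A minimal sketch of the same lookup with client-go; the node name comes from the log, and the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-384000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a ResourceList (resource name -> quantity), e.g. cpu and ephemeral-storage.
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), eph.String())
}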
	I0223 17:04:56.703032   30321 start.go:228] waiting for startup goroutines ...
	I0223 17:04:56.703038   30321 start.go:233] waiting for cluster config update ...
	I0223 17:04:56.703046   30321 start.go:242] writing updated cluster config ...
	I0223 17:04:56.725399   30321 out.go:177] 
	I0223 17:04:56.746945   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:04:56.747045   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:56.769778   30321 out.go:177] * Starting worker node multinode-384000-m02 in cluster multinode-384000
	I0223 17:04:56.812657   30321 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:04:56.835542   30321 out.go:177] * Pulling base image ...
	I0223 17:04:56.895701   30321 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:04:56.895717   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:56.895743   30321 cache.go:57] Caching tarball of preloaded images
	I0223 17:04:56.895977   30321 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:04:56.896004   30321 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 17:04:56.896140   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:56.952914   30321 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:04:56.952934   30321 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:04:56.952951   30321 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:04:56.952991   30321 start.go:364] acquiring machines lock for multinode-384000-m02: {Name:mk1527be69dd402dbd34e5a5f430e92116796580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:04:56.953144   30321 start.go:368] acquired machines lock for "multinode-384000-m02" in 140.195µs
	I0223 17:04:56.953171   30321 start.go:93] Provisioning new machine with config: &{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 17:04:56.953241   30321 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 17:04:56.974704   30321 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 17:04:56.974925   30321 start.go:159] libmachine.API.Create for "multinode-384000" (driver="docker")
	I0223 17:04:56.974960   30321 client.go:168] LocalClient.Create starting
	I0223 17:04:56.975194   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 17:04:56.975309   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:56.975336   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:56.975438   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 17:04:56.975511   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:56.975528   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:56.996371   30321 cli_runner.go:164] Run: docker network inspect multinode-384000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 17:04:57.052103   30321 network_create.go:76] Found existing network {name:multinode-384000 subnet:0xc0001464e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 17:04:57.052150   30321 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-384000-m02" container
	I0223 17:04:57.052267   30321 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 17:04:57.109290   30321 cli_runner.go:164] Run: docker volume create multinode-384000-m02 --label name.minikube.sigs.k8s.io=multinode-384000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 17:04:57.164739   30321 oci.go:103] Successfully created a docker volume multinode-384000-m02
	I0223 17:04:57.164878   30321 cli_runner.go:164] Run: docker run --rm --name multinode-384000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000-m02 --entrypoint /usr/bin/test -v multinode-384000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 17:04:57.621060   30321 oci.go:107] Successfully prepared a docker volume multinode-384000-m02
	I0223 17:04:57.621098   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:57.621110   30321 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 17:04:57.621231   30321 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 17:05:04.077983   30321 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.456736337s)
	I0223 17:05:04.078008   30321 kic.go:199] duration metric: took 6.456968 seconds to extract preloaded images to volume
	I0223 17:05:04.078132   30321 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 17:05:04.221365   30321 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-384000-m02 --name multinode-384000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-384000-m02 --network multinode-384000 --ip 192.168.58.3 --volume multinode-384000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 17:05:04.592292   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Running}}
	I0223 17:05:04.655943   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:05:04.745777   30321 cli_runner.go:164] Run: docker exec multinode-384000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 17:05:04.851906   30321 oci.go:144] the created container "multinode-384000-m02" has a running status.
	I0223 17:05:04.851934   30321 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa...
	I0223 17:05:05.035027   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 17:05:05.035093   30321 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 17:05:05.140942   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:05:05.201256   30321 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 17:05:05.201277   30321 kic_runner.go:114] Args: [docker exec --privileged multinode-384000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 17:05:05.310563   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:05:05.368035   30321 machine.go:88] provisioning docker machine ...
	I0223 17:05:05.368075   30321 ubuntu.go:169] provisioning hostname "multinode-384000-m02"
	I0223 17:05:05.368180   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:05.426449   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:05.426841   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:05.426851   30321 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-384000-m02 && echo "multinode-384000-m02" | sudo tee /etc/hostname
	I0223 17:05:05.569180   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-384000-m02
	
	I0223 17:05:05.569295   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:05.628418   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:05.628769   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:05.628786   30321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-384000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-384000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-384000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:05:05.764951   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
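The hostname provisioning above opens an SSH session to 127.0.0.1:58195 with the freshly generated id_rsa key and runs "sudo hostname ... | sudo tee /etc/hostname", then patches /etc/hosts. A rough equivalent of that round trip using golang.org/x/crypto/ssh is sketched below; the key path, port, user, and command come from the log, while the host-key handling and error checks are illustrative only.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:58195", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command the provisioner issues: set the kernel hostname and persist it.
	out, err := sess.CombinedOutput(`sudo hostname multinode-384000-m02 && echo "multinode-384000-m02" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}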
	I0223 17:05:05.764973   30321 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:05:05.764987   30321 ubuntu.go:177] setting up certificates
	I0223 17:05:05.764993   30321 provision.go:83] configureAuth start
	I0223 17:05:05.765071   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000-m02
	I0223 17:05:05.822213   30321 provision.go:138] copyHostCerts
	I0223 17:05:05.822267   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:05:05.822324   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:05:05.822330   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:05:05.822445   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:05:05.822609   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:05:05.822639   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:05:05.822644   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:05:05.822706   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:05:05.822824   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:05:05.822858   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:05:05.822862   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:05:05.822924   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:05:05.823045   30321 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.multinode-384000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-384000-m02]
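provision.go:112 reports generating a server certificate signed by the profile CA, with the organization and SAN list shown. Below is a compact standard-library sketch of that step; it assumes the CA key is a PKCS#1 RSA key and elides error handling, so treat it as an illustration rather than minikube's real provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Error handling elided for brevity; assumes the CA key is a PKCS#1 RSA key.
	caPEM, _ := os.ReadFile("/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem")
	caKeyPEM, _ := os.ReadFile("/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Fresh key for the node's server certificate.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-384000-m02"}},
		// SAN list from the provision.go:112 line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-384000-m02"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}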
	I0223 17:05:05.990734   30321 provision.go:172] copyRemoteCerts
	I0223 17:05:05.990799   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:05:05.990857   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.048588   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:06.143936   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 17:05:06.144018   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:05:06.161517   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 17:05:06.161592   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 17:05:06.179781   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 17:05:06.179873   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:05:06.197146   30321 provision.go:86] duration metric: configureAuth took 432.148846ms
	I0223 17:05:06.197159   30321 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:05:06.197315   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:05:06.197385   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.255268   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:06.255640   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:06.255652   30321 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:05:06.392853   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:05:06.392884   30321 ubuntu.go:71] root file system type: overlay
	I0223 17:05:06.392997   30321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:05:06.393086   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.450967   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:06.451322   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:06.451378   30321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:05:06.593100   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:05:06.593197   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.652287   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:06.652691   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:06.652705   30321 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:05:07.278808   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:05:06.590454180 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 17:05:07.278835   30321 machine.go:91] provisioned docker machine in 1.910797134s
	I0223 17:05:07.278841   30321 client.go:171] LocalClient.Create took 10.303988911s
	I0223 17:05:07.278857   30321 start.go:167] duration metric: libmachine.API.Create for "multinode-384000" took 10.304049402s
	I0223 17:05:07.278864   30321 start.go:300] post-start starting for "multinode-384000-m02" (driver="docker")
	I0223 17:05:07.278874   30321 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:05:07.278956   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:05:07.279015   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.338828   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:07.434171   30321 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:05:07.437869   30321 command_runner.go:130] > NAME="Ubuntu"
	I0223 17:05:07.437879   30321 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 17:05:07.437883   30321 command_runner.go:130] > ID=ubuntu
	I0223 17:05:07.437887   30321 command_runner.go:130] > ID_LIKE=debian
	I0223 17:05:07.437894   30321 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 17:05:07.437899   30321 command_runner.go:130] > VERSION_ID="20.04"
	I0223 17:05:07.437903   30321 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 17:05:07.437909   30321 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 17:05:07.437914   30321 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 17:05:07.437921   30321 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 17:05:07.437926   30321 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 17:05:07.437936   30321 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 17:05:07.437990   30321 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:05:07.438006   30321 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:05:07.438013   30321 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:05:07.438017   30321 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:05:07.438023   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:05:07.438115   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:05:07.438288   30321 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:05:07.438293   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /etc/ssl/certs/248852.pem
	I0223 17:05:07.438480   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:05:07.445883   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:05:07.463353   30321 start.go:303] post-start completed in 184.476962ms
	I0223 17:05:07.463880   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000-m02
	I0223 17:05:07.521526   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:05:07.521986   30321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:05:07.522116   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.579004   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:07.670593   30321 command_runner.go:130] > 6%
	I0223 17:05:07.670679   30321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:05:07.675002   30321 command_runner.go:130] > 92G
	I0223 17:05:07.675361   30321 start.go:128] duration metric: createHost completed in 10.722232854s
	I0223 17:05:07.675374   30321 start.go:83] releasing machines lock for "multinode-384000-m02", held for 10.722341134s
	I0223 17:05:07.675460   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000-m02
	I0223 17:05:07.756730   30321 out.go:177] * Found network options:
	I0223 17:05:07.777737   30321 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 17:05:07.799691   30321 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 17:05:07.799742   30321 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 17:05:07.799873   30321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:05:07.799981   30321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 17:05:07.799987   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.800095   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.862121   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:07.862223   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:08.010244   30321 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 17:05:08.010300   30321 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 17:05:08.010307   30321 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 17:05:08.010314   30321 command_runner.go:130] > Device: 10001bh/1048603d	Inode: 2885211     Links: 1
	I0223 17:05:08.010319   30321 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:05:08.010325   30321 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:05:08.010330   30321 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:05:08.010335   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.964225417 +0000
	I0223 17:05:08.010340   30321 command_runner.go:130] >  Birth: -
	I0223 17:05:08.010432   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:05:08.031166   30321 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:05:08.031244   30321 ssh_runner.go:195] Run: which cri-dockerd
	I0223 17:05:08.035387   30321 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 17:05:08.035474   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 17:05:08.042907   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 17:05:08.056004   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 17:05:08.070780   30321 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 17:05:08.070824   30321 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 17:05:08.070835   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:05:08.070849   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:05:08.070931   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:05:08.083682   30321 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:05:08.083697   30321 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:05:08.084488   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 17:05:08.093536   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:05:08.102147   30321 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:05:08.102214   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:05:08.111079   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:05:08.119479   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:05:08.127924   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:05:08.136205   30321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:05:08.144205   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:05:08.152782   30321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:05:08.159430   30321 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 17:05:08.160102   30321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:05:08.167247   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:05:08.251984   30321 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:05:08.323259   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:05:08.323279   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:05:08.323342   30321 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:05:08.339960   30321 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 17:05:08.340119   30321 command_runner.go:130] > [Unit]
	I0223 17:05:08.340128   30321 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 17:05:08.340137   30321 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 17:05:08.340145   30321 command_runner.go:130] > BindsTo=containerd.service
	I0223 17:05:08.340153   30321 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 17:05:08.340158   30321 command_runner.go:130] > Wants=network-online.target
	I0223 17:05:08.340198   30321 command_runner.go:130] > Requires=docker.socket
	I0223 17:05:08.340215   30321 command_runner.go:130] > StartLimitBurst=3
	I0223 17:05:08.340227   30321 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 17:05:08.340235   30321 command_runner.go:130] > [Service]
	I0223 17:05:08.340240   30321 command_runner.go:130] > Type=notify
	I0223 17:05:08.340245   30321 command_runner.go:130] > Restart=on-failure
	I0223 17:05:08.340250   30321 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 17:05:08.340256   30321 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 17:05:08.340281   30321 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 17:05:08.340289   30321 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 17:05:08.340295   30321 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 17:05:08.340303   30321 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 17:05:08.340308   30321 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 17:05:08.340314   30321 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 17:05:08.340327   30321 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 17:05:08.340336   30321 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 17:05:08.340339   30321 command_runner.go:130] > ExecStart=
	I0223 17:05:08.340350   30321 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 17:05:08.340355   30321 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 17:05:08.340360   30321 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 17:05:08.340365   30321 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 17:05:08.340369   30321 command_runner.go:130] > LimitNOFILE=infinity
	I0223 17:05:08.340372   30321 command_runner.go:130] > LimitNPROC=infinity
	I0223 17:05:08.340378   30321 command_runner.go:130] > LimitCORE=infinity
	I0223 17:05:08.340382   30321 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 17:05:08.340387   30321 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 17:05:08.340390   30321 command_runner.go:130] > TasksMax=infinity
	I0223 17:05:08.340393   30321 command_runner.go:130] > TimeoutStartSec=0
	I0223 17:05:08.340399   30321 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 17:05:08.340402   30321 command_runner.go:130] > Delegate=yes
	I0223 17:05:08.340411   30321 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 17:05:08.340415   30321 command_runner.go:130] > KillMode=process
	I0223 17:05:08.340418   30321 command_runner.go:130] > [Install]
	I0223 17:05:08.340423   30321 command_runner.go:130] > WantedBy=multi-user.target
	I0223 17:05:08.340775   30321 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:05:08.340851   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:05:08.351191   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:05:08.364825   30321 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:05:08.364838   30321 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:05:08.365884   30321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:05:08.464645   30321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:05:08.557690   30321 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:05:08.557707   30321 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:05:08.571730   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:05:08.666053   30321 ssh_runner.go:195] Run: sudo systemctl restart docker
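docker.go:529 writes a small /etc/docker/daemon.json (144 bytes) to pin the cgroupfs cgroup driver, then reloads systemd and restarts Docker. The exact payload is not echoed in the log; the sketch below assumes the usual exec-opts mechanism that Docker's daemon.json uses for selecting a cgroup driver, and the struct is invented for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig mirrors just the daemon.json key needed to pin the cgroup driver;
// the real file minikube ships may carry additional settings.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// This payload would be copied to /etc/docker/daemon.json before
	// `systemctl daemon-reload && systemctl restart docker`, as in the log.
	fmt.Println(string(b))
}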
	I0223 17:05:08.890228   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:05:08.972036   30321 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 17:05:08.972111   30321 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 17:05:09.042258   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:05:09.117012   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:05:09.190875   30321 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 17:05:09.202546   30321 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 17:05:09.202636   30321 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 17:05:09.206733   30321 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 17:05:09.206746   30321 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 17:05:09.206753   30321 command_runner.go:130] > Device: 100023h/1048611d	Inode: 206         Links: 1
	I0223 17:05:09.206762   30321 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 17:05:09.206768   30321 command_runner.go:130] > Access: 2023-02-24 01:05:09.198454010 +0000
	I0223 17:05:09.206775   30321 command_runner.go:130] > Modify: 2023-02-24 01:05:09.198454010 +0000
	I0223 17:05:09.206784   30321 command_runner.go:130] > Change: 2023-02-24 01:05:09.199454010 +0000
	I0223 17:05:09.206795   30321 command_runner.go:130] >  Birth: -
	I0223 17:05:09.206816   30321 start.go:553] Will wait 60s for crictl version
	I0223 17:05:09.206888   30321 ssh_runner.go:195] Run: which crictl
	I0223 17:05:09.210420   30321 command_runner.go:130] > /usr/bin/crictl
	I0223 17:05:09.210468   30321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 17:05:09.303225   30321 command_runner.go:130] > Version:  0.1.0
	I0223 17:05:09.303238   30321 command_runner.go:130] > RuntimeName:  docker
	I0223 17:05:09.303242   30321 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 17:05:09.303246   30321 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 17:05:09.305294   30321 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 17:05:09.305380   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:05:09.330876   30321 command_runner.go:130] > 23.0.1
	I0223 17:05:09.332433   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:05:09.356414   30321 command_runner.go:130] > 23.0.1
	I0223 17:05:09.400447   30321 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 17:05:09.421390   30321 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 17:05:09.442812   30321 cli_runner.go:164] Run: docker exec -t multinode-384000-m02 dig +short host.docker.internal
	I0223 17:05:09.559246   30321 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:05:09.559354   30321 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:05:09.563904   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
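The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to 192.168.65.2, first stripping any stale mapping and then appending the new one. A small Go rendition of the same replace-or-append logic, written against a local file path so it can be tried safely; the helper name is invented for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "<TAB>name" and appends a
// fresh "ip<TAB>name" mapping, mirroring the grep -v / echo idiom run over SSH above.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping; replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Trial run against a local copy; the real code targets the node's /etc/hosts over SSH.
	if err := upsertHostsEntry("hosts.copy", "192.168.65.2", "host.minikube.internal"); err != nil {
		panic(err)
	}
}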
	I0223 17:05:09.574258   30321 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000 for IP: 192.168.58.3
	I0223 17:05:09.574275   30321 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:05:09.574462   30321 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:05:09.574527   30321 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:05:09.574543   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 17:05:09.574567   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 17:05:09.574585   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 17:05:09.574609   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 17:05:09.574707   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:05:09.574757   30321 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:05:09.574768   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:05:09.574827   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:05:09.574868   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:05:09.574898   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:05:09.574966   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:05:09.575001   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem -> /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.575021   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.575044   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.575346   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:05:09.592916   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:05:09.610527   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:05:09.628140   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:05:09.645408   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:05:09.662952   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:05:09.695538   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:05:09.713051   30321 ssh_runner.go:195] Run: openssl version
	I0223 17:05:09.718333   30321 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 17:05:09.718660   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:05:09.726987   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.730907   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.730990   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.731036   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.736123   30321 command_runner.go:130] > 3ec20f2e
	I0223 17:05:09.736564   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 17:05:09.744982   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:05:09.753293   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.757358   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.757469   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.757531   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.762804   30321 command_runner.go:130] > b5213941
	I0223 17:05:09.763200   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:05:09.771538   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:05:09.779871   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.783786   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.783818   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.783859   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.789022   30321 command_runner.go:130] > 51391683
	I0223 17:05:09.789291   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
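The certs.go:444 lines above install each CA certificate by hashing it with openssl and symlinking /etc/ssl/certs/<hash>.0 to the PEM file. Below is a minimal Go sketch of that pair of commands, using the minikubeCA.pem path and hash seen in the log; it must run as root on the node, and the helper name is invented for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the two steps from the log: compute the certificate's
// subject hash with openssl, then point /etc/ssl/certs/<hash>.0 at the PEM file
// so the system trust store can find it.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}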
	I0223 17:05:09.797739   30321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:05:09.822341   30321 command_runner.go:130] > cgroupfs
	I0223 17:05:09.824054   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:05:09.824066   30321 cni.go:136] 2 nodes found, recommending kindnet
	I0223 17:05:09.824073   30321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:05:09.824085   30321 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-384000 NodeName:multinode-384000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:05:09.824161   30321 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-384000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
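kubeadm.go:177 above dumps the fully rendered kubeadm config for the joining node, built from the option struct logged at kubeadm.go:172. The toy text/template rendering below shows how the per-node values (advertise address, node name, CRI socket) slot into the InitConfiguration section; the struct and template are invented for illustration and are not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// nodeOpts carries just the values that vary per node in the config dump above.
type nodeOpts struct {
	AdvertiseAddress string
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, nodeOpts{
		AdvertiseAddress: "192.168.58.3",
		NodeName:         "multinode-384000-m02",
		CRISocket:        "/var/run/cri-dockerd.sock",
	})
}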
	I0223 17:05:09.824206   30321 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-384000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 17:05:09.824271   30321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 17:05:09.831786   30321 command_runner.go:130] > kubeadm
	I0223 17:05:09.831795   30321 command_runner.go:130] > kubectl
	I0223 17:05:09.831802   30321 command_runner.go:130] > kubelet
	I0223 17:05:09.832530   30321 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:05:09.832588   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 17:05:09.840179   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 17:05:09.853476   30321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:05:09.866808   30321 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:05:09.870658   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:05:09.880714   30321 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:05:09.880895   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:05:09.880910   30321 start.go:301] JoinCluster: &{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:05:09.880969   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 17:05:09.881029   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:05:09.939886   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:05:10.114312   30321 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mflu20.fcb217p8h9corip6 --discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 
	I0223 17:05:10.114363   30321 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 17:05:10.114395   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mflu20.fcb217p8h9corip6 --discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-384000-m02"
	I0223 17:05:10.153989   30321 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 17:05:10.266110   30321 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 17:05:10.266130   30321 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 17:05:10.292288   30321 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:05:10.292302   30321 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:05:10.292313   30321 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 17:05:10.374526   30321 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 17:05:11.887197   30321 command_runner.go:130] > This node has joined the cluster:
	I0223 17:05:11.887211   30321 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 17:05:11.887217   30321 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 17:05:11.887225   30321 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 17:05:11.890389   30321 command_runner.go:130] ! W0224 01:05:10.153407    1235 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 17:05:11.890404   30321 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 17:05:11.890413   30321 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:05:11.890430   30321 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mflu20.fcb217p8h9corip6 --discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-384000-m02": (1.776040987s)
	I0223 17:05:11.890446   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 17:05:12.019786   30321 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 17:05:12.019816   30321 start.go:303] JoinCluster complete in 2.138928977s
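The worker-join sequence recorded above can be retraced by hand. The sketch below only restates the commands the log shows minikube running (token, CA-cert hash, CRI socket and node name are copied from this run and should be treated as placeholders for any other cluster):

    # 1. On the control plane: print a join command with a non-expiring token.
    sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" \
      kubeadm token create --print-join-command --ttl=0

    # 2. On the worker (m02): join with the printed token and discovery hash.
    sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" \
      kubeadm join control-plane.minikube.internal:8443 \
        --token mflu20.fcb217p8h9corip6 \
        --discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 \
        --ignore-preflight-errors=all \
        --cri-socket /var/run/cri-dockerd.sock \
        --node-name=multinode-384000-m02

    # 3. Enable and start the kubelet, as minikube does right after the join.
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet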
	I0223 17:05:12.019830   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:05:12.019842   30321 cni.go:136] 2 nodes found, recommending kindnet
	I0223 17:05:12.019955   30321 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 17:05:12.024878   30321 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 17:05:12.024900   30321 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 17:05:12.024917   30321 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0223 17:05:12.024929   30321 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:05:12.024937   30321 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:05:12.024943   30321 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:05:12.024951   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.136225471 +0000
	I0223 17:05:12.024957   30321 command_runner.go:130] >  Birth: -
	I0223 17:05:12.025085   30321 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 17:05:12.025096   30321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 17:05:12.039239   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 17:05:12.225961   30321 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 17:05:12.229143   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 17:05:12.230957   30321 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 17:05:12.239205   30321 command_runner.go:130] > daemonset.apps/kindnet configured
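With two nodes detected, minikube re-applies its kindnet CNI manifest. A minimal sketch of that step, assuming the manifest has already been copied to /var/tmp/minikube/cni.yaml as the scp line above shows:

    # Apply the CNI manifest against the cluster's own kubeconfig (binary and paths taken from the log).
    sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -f /var/tmp/minikube/cni.yaml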
	I0223 17:05:12.245445   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:05:12.245662   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:05:12.245949   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:05:12.245957   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.245963   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.245969   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.248382   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.248392   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.248398   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.248404   30321 round_trippers.go:580]     Audit-Id: ea2056a3-bae1-48cb-b05a-d18528f66c75
	I0223 17:05:12.248409   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.248414   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.248419   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.248425   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.248432   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:05:12.248443   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"430","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 17:05:12.248495   30321 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-384000" context rescaled to 1 replicas
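After the manifest apply, minikube reads the coredns scale subresource and pins the deployment to one replica for the multi-node profile. It does this through the client-go scale client shown above; a rough hand-run equivalent (a sketch, not what minikube literally executes) would be:

    # Inspect the current coredns replica count, then pin it to 1.
    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
    kubectl -n kube-system scale deployment coredns --replicas=1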
	I0223 17:05:12.248511   30321 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 17:05:12.271711   30321 out.go:177] * Verifying Kubernetes components...
	I0223 17:05:12.314058   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:05:12.326008   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:05:12.386090   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:05:12.386332   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:05:12.386562   30321 node_ready.go:35] waiting up to 6m0s for node "multinode-384000-m02" to be "Ready" ...
	I0223 17:05:12.386604   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:12.386608   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.386614   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.386622   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.389431   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.389443   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.389449   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.389454   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.389459   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.389463   30321 round_trippers.go:580]     Audit-Id: bead5394-02db-4b8b-9355-a8baf5674402
	I0223 17:05:12.389468   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.389473   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.389541   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:12.389737   30321 node_ready.go:49] node "multinode-384000-m02" has status "Ready":"True"
	I0223 17:05:12.389743   30321 node_ready.go:38] duration metric: took 3.172945ms waiting for node "multinode-384000-m02" to be "Ready" ...
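The new node reported Ready on the first poll (3.17 ms). Checking the same condition by hand, as the kubeadm output earlier suggests, would look roughly like this (node name taken from the log; the 6m timeout mirrors minikube's own budget):

    # Run against the control plane's kubeconfig.
    kubectl get nodes multinode-384000-m02
    kubectl wait --for=condition=Ready node/multinode-384000-m02 --timeout=6m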
	I0223 17:05:12.389748   30321 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:05:12.389790   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:05:12.389795   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.389800   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.389807   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.393300   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:12.393313   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.393318   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.393324   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.393331   30321 round_trippers.go:580]     Audit-Id: 53584cab-94ca-498b-b86b-54a02b2adecb
	I0223 17:05:12.393337   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.393342   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.393350   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.394551   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"477"},"items":[{"metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0223 17:05:12.396798   30321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.396879   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:05:12.396887   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.396896   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.396904   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.399747   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.399760   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.399766   30321 round_trippers.go:580]     Audit-Id: 1c532d78-c7eb-4ef2-947a-f201f9ab9909
	I0223 17:05:12.399772   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.399777   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.399782   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.399790   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.399797   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.399860   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 17:05:12.400113   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.400121   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.400129   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.400137   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.402360   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.402370   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.402378   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.402391   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.402397   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.402404   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.402410   30321 round_trippers.go:580]     Audit-Id: 354377ae-0145-4719-bd52-9603b9baf89e
	I0223 17:05:12.402418   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.402669   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.402854   30321 pod_ready.go:92] pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.402861   30321 pod_ready.go:81] duration metric: took 6.045328ms waiting for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.402868   30321 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.402898   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/etcd-multinode-384000
	I0223 17:05:12.402904   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.402910   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.402918   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.405228   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.405240   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.405248   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.405254   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.405260   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.405266   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.405273   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.405279   30321 round_trippers.go:580]     Audit-Id: 77b942b3-e5e8-4bd8-8ecf-5cd4ba94ebd4
	I0223 17:05:12.405338   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-384000","namespace":"kube-system","uid":"c892d753-c892-4834-ba6f-34c4703cfa21","resourceVersion":"266","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.mirror":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.seen":"2023-02-24T01:04:26.472791839Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 17:05:12.405580   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.405586   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.405592   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.405601   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.407513   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:05:12.407521   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.407526   30321 round_trippers.go:580]     Audit-Id: 17744f25-8693-4281-bf89-144bfeeaf1d9
	I0223 17:05:12.407531   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.407538   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.407544   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.407548   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.407554   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.407611   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.407784   30321 pod_ready.go:92] pod "etcd-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.407789   30321 pod_ready.go:81] duration metric: took 4.917084ms waiting for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.407799   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.407829   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-384000
	I0223 17:05:12.407833   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.407839   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.407845   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.409860   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.409869   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.409876   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.409881   30321 round_trippers.go:580]     Audit-Id: e5ddbe1c-92ae-40f9-9dbb-d5800989f628
	I0223 17:05:12.409887   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.409892   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.409898   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.409904   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.409983   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-384000","namespace":"kube-system","uid":"c42cb310-4d3e-44ed-aa9c-0f0bc12249d1","resourceVersion":"261","creationTimestamp":"2023-02-24T01:04:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.mirror":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.seen":"2023-02-24T01:04:17.403781278Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 17:05:12.410242   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.410248   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.410254   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.410260   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.412472   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.412481   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.412487   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.412492   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.412500   30321 round_trippers.go:580]     Audit-Id: c96077ff-adfc-472f-bb65-e802e9b61025
	I0223 17:05:12.412505   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.412511   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.412517   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.412582   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.412748   30321 pod_ready.go:92] pod "kube-apiserver-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.412753   30321 pod_ready.go:81] duration metric: took 4.949693ms waiting for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.412759   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.412786   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-384000
	I0223 17:05:12.412791   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.412797   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.412803   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.414996   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.415009   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.415015   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.415020   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.415029   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.415035   30321 round_trippers.go:580]     Audit-Id: b9382983-4d6e-43ea-9a06-be0c6b02d42a
	I0223 17:05:12.415040   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.415045   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.415132   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-384000","namespace":"kube-system","uid":"ac83dab3-bb77-4542-9452-419c3f5087cb","resourceVersion":"264","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.mirror":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 17:05:12.415410   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.415417   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.415425   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.415433   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.418632   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:12.418643   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.418652   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.418657   30321 round_trippers.go:580]     Audit-Id: 630250ed-1dce-4c98-8381-43ac62ac4a39
	I0223 17:05:12.418662   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.418667   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.418672   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.418679   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.418962   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.419166   30321 pod_ready.go:92] pod "kube-controller-manager-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.419172   30321 pod_ready.go:81] duration metric: took 6.407903ms waiting for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.419178   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q28gd" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.588005   30321 request.go:622] Waited for 168.736696ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:12.588048   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:12.588055   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.588064   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.588072   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.590978   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.590991   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.590997   30321 round_trippers.go:580]     Audit-Id: 9dc39c88-79f2-474d-8f4a-d3217a686c41
	I0223 17:05:12.591001   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.591006   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.591012   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.591017   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.591035   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.591197   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"463","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 17:05:12.786943   30321 request.go:622] Waited for 195.482229ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:12.787102   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:12.787110   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.787122   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.787133   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.791223   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:12.791242   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.791250   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.791258   30321 round_trippers.go:580]     Audit-Id: 5edde90b-f341-4547-959f-3dbfac67ca4e
	I0223 17:05:12.791266   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.791272   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.791280   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.791287   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.791367   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:13.291848   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:13.291876   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.291889   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.291898   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.295653   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:13.295666   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.295673   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.295681   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.295688   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.295693   30321 round_trippers.go:580]     Audit-Id: 7a954963-1cce-4cd3-ab9e-3ee5a85eacad
	I0223 17:05:13.295697   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.295703   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.295763   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"463","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 17:05:13.296004   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:13.296010   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.296016   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.296030   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.298583   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:13.298596   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.298602   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.298617   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.298626   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.298634   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.298646   30321 round_trippers.go:580]     Audit-Id: b3a81aa1-530a-4ff7-8cb7-67ef94eec823
	I0223 17:05:13.298661   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.298719   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:13.793899   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:13.793925   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.794018   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.794034   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.798151   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:13.798167   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.798175   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.798182   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.798194   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.798202   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.798209   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.798216   30321 round_trippers.go:580]     Audit-Id: db81ceaf-56c2-4228-8379-59ca8d7862e5
	I0223 17:05:13.798306   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:13.798553   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:13.798558   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.798564   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.798576   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.801544   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:13.801558   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.801564   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.801569   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.801573   30321 round_trippers.go:580]     Audit-Id: c388a579-34a7-4c6e-a4e2-4d5e6f2fe4af
	I0223 17:05:13.801577   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.801583   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.801588   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.801663   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:14.291809   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:14.291824   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.291843   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.291850   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.294441   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:14.294452   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.294461   30321 round_trippers.go:580]     Audit-Id: 31c1a410-41f8-4af3-ad86-b6dff5c28d10
	I0223 17:05:14.294472   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.294477   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.294482   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.294487   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.294492   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.294547   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:14.294813   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:14.294820   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.294826   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.294831   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.297192   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:14.297202   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.297208   30321 round_trippers.go:580]     Audit-Id: d6b67bc2-a54f-46c3-8042-4d165fe8ceec
	I0223 17:05:14.297213   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.297217   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.297224   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.297233   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.297238   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.297327   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:14.792083   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:14.792110   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.792123   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.792133   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.795831   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:14.795845   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.795850   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.795855   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.795860   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.795865   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.795870   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.795874   30321 round_trippers.go:580]     Audit-Id: ae4a7367-d438-424c-a55f-70205c89bc50
	I0223 17:05:14.795935   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:14.796182   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:14.796189   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.796195   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.796200   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.798521   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:14.798533   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.798539   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.798546   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.798551   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.798556   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.798560   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.798565   30321 round_trippers.go:580]     Audit-Id: d9ef0bc3-b3db-419a-8689-df682f462d3a
	I0223 17:05:14.798614   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:14.798779   30321 pod_ready.go:102] pod "kube-proxy-q28gd" in "kube-system" namespace has status "Ready":"False"
	I0223 17:05:15.291967   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:15.291992   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.292004   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.292014   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.296638   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:15.296653   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.296659   30321 round_trippers.go:580]     Audit-Id: 0f3151cd-7492-4803-84a8-a7b2593cfbff
	I0223 17:05:15.296664   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.296669   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.296673   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.296679   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.296685   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.296747   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:15.297015   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:15.297021   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.297027   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.297033   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.299045   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:15.299054   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.299062   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.299068   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.299073   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.299078   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.299082   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.299088   30321 round_trippers.go:580]     Audit-Id: 0f8438de-cf80-4e2b-9473-ece53f15e408
	I0223 17:05:15.299130   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:15.791770   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:15.791788   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.791797   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.791807   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.794876   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:15.794891   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.794904   30321 round_trippers.go:580]     Audit-Id: 2dc876d7-92fb-46d1-a597-ef5b39be1b87
	I0223 17:05:15.794914   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.794925   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.794933   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.794956   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.794971   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.795069   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:15.795341   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:15.795348   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.795354   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.795364   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.797912   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:15.797928   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.797934   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.797940   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.797945   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.797950   30321 round_trippers.go:580]     Audit-Id: e841ef89-e713-42a9-83d2-c30b3af5214e
	I0223 17:05:15.797956   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.797961   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.798010   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:16.292138   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:16.292164   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.292176   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.292186   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.296455   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:16.296470   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.296484   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.296492   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.296498   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.296506   30321 round_trippers.go:580]     Audit-Id: 9bbc4e9b-6ca9-451a-92be-1ee9c8728ba4
	I0223 17:05:16.296514   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.296519   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.296698   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:16.296951   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:16.296957   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.296963   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.296969   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.298985   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:16.298995   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.299001   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.299007   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.299012   30321 round_trippers.go:580]     Audit-Id: 53e1ff72-fd22-42e4-9732-2f5da00ce66f
	I0223 17:05:16.299018   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.299023   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.299028   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.299075   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:16.791962   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:16.791995   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.792008   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.792017   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.795678   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:16.795690   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.795696   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.795702   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.795709   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.795719   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.795729   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.795736   30321 round_trippers.go:580]     Audit-Id: 1c3d2a12-85c9-4fc8-9114-55cdee49a74b
	I0223 17:05:16.795922   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:16.796245   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:16.796253   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.796260   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.796265   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.798495   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:16.798507   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.798513   30321 round_trippers.go:580]     Audit-Id: 75c69917-1f17-4c7d-a4cb-1dce5c4ecefd
	I0223 17:05:16.798519   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.798524   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.798529   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.798533   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.798539   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.798579   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:17.293215   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:17.293284   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.293299   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.293311   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.297399   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:17.297413   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.297422   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.297433   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.297441   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.297447   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.297455   30321 round_trippers.go:580]     Audit-Id: e64aab21-337d-4605-9232-31a915e3e8f7
	I0223 17:05:17.297463   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.297537   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"490","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 17:05:17.297837   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:17.297843   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.297849   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.297855   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.299815   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:05:17.299824   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.299829   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.299835   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.299840   30321 round_trippers.go:580]     Audit-Id: 489a7480-0712-4769-8bae-e4b3dfdd2940
	I0223 17:05:17.299844   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.299850   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.299854   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.299905   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:17.300058   30321 pod_ready.go:92] pod "kube-proxy-q28gd" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:17.300067   30321 pod_ready.go:81] duration metric: took 4.880938879s waiting for pod "kube-proxy-q28gd" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.300079   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.300122   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-wmsxr
	I0223 17:05:17.300128   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.300134   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.300140   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.302739   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.302752   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.302761   30321 round_trippers.go:580]     Audit-Id: ee5468e2-2ffa-4ec5-8128-ee2e63879f80
	I0223 17:05:17.302790   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.302801   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.302809   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.302816   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.302824   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.302951   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wmsxr","generateName":"kube-proxy-","namespace":"kube-system","uid":"6d046618-e274-4a16-8846-14837962c18d","resourceVersion":"391","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 17:05:17.303293   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:17.303304   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.303313   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.303320   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.306226   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.306237   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.306243   30321 round_trippers.go:580]     Audit-Id: 8ef671db-d50b-4f6e-80b9-e6ffe1659f1f
	I0223 17:05:17.306248   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.306254   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.306258   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.306263   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.306268   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.306320   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:17.306506   30321 pod_ready.go:92] pod "kube-proxy-wmsxr" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:17.306512   30321 pod_ready.go:81] duration metric: took 6.418198ms waiting for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.306517   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.306552   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-384000
	I0223 17:05:17.306556   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.306562   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.306569   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.308591   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.308601   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.308607   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.308612   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.308618   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.308623   30321 round_trippers.go:580]     Audit-Id: 5d223e83-7185-4965-a56b-f6c3be8f5bff
	I0223 17:05:17.308627   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.308632   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.308685   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-384000","namespace":"kube-system","uid":"f914009d-3787-433d-8e3e-2f597d741c7e","resourceVersion":"279","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.mirror":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807884Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 17:05:17.308908   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:17.308914   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.308920   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.308926   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.310802   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:05:17.310814   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.310820   30321 round_trippers.go:580]     Audit-Id: a6419459-b53e-4e20-bfde-cc0d2020cf8e
	I0223 17:05:17.310825   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.310830   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.310834   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.310839   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.310843   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.310906   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:17.311107   30321 pod_ready.go:92] pod "kube-scheduler-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:17.311114   30321 pod_ready.go:81] duration metric: took 4.591821ms waiting for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.311120   30321 pod_ready.go:38] duration metric: took 4.921421097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:05:17.311132   30321 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 17:05:17.311189   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:05:17.321070   30321 system_svc.go:56] duration metric: took 9.933593ms WaitForService to wait for kubelet.
	I0223 17:05:17.321083   30321 kubeadm.go:578] duration metric: took 5.072613007s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 17:05:17.321097   30321 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:05:17.386575   30321 request.go:622] Waited for 65.441869ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes
	I0223 17:05:17.386620   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes
	I0223 17:05:17.386625   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.386637   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.386644   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.389374   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.389385   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.389392   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.389399   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.389406   30321 round_trippers.go:580]     Audit-Id: 71aace31-8e13-4764-b7ab-2975d451c1ca
	I0223 17:05:17.389411   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.389423   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.389432   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.389682   30321 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10175 chars]
	I0223 17:05:17.390006   30321 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:05:17.390014   30321 node_conditions.go:123] node cpu capacity is 6
	I0223 17:05:17.390028   30321 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:05:17.390033   30321 node_conditions.go:123] node cpu capacity is 6
	I0223 17:05:17.390036   30321 node_conditions.go:105] duration metric: took 68.936647ms to run NodePressure ...
	I0223 17:05:17.390044   30321 start.go:228] waiting for startup goroutines ...
	I0223 17:05:17.390068   30321 start.go:242] writing updated cluster config ...
	I0223 17:05:17.390377   30321 ssh_runner.go:195] Run: rm -f paused
	I0223 17:05:17.429574   30321 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 17:05:17.474432   30321 out.go:177] * Done! kubectl is now configured to use "multinode-384000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 01:04:08 UTC, end at Fri 2023-02-24 01:05:24 UTC. --
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522327381Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522353322Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522364554Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522382213Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522398747Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522429785Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522444776Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522462933Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522516501Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522901407Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522944238Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.523375575Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.531192511Z" level=info msg="Loading containers: start."
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.609257090Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.642489297Z" level=info msg="Loading containers: done."
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.650852281Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.650917985Z" level=info msg="Daemon has completed initialization"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.671691797Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 01:04:12 multinode-384000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.675903166Z" level=info msg="API listen on [::]:2376"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.682172168Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.002839887Z" level=info msg="ignoring event" container=e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.108713449Z" level=info msg="ignoring event" container=e3eb3324627b8d0da5874eff0e3736635555f1bfb77aa0e7e7ab4e3fbcfd5c95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.507268499Z" level=info msg="ignoring event" container=589074bbb37e751b1a2f17d08fcdfbbb9bf359c05d004737b94812bae43849d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.571292264Z" level=info msg="ignoring event" container=2a7643eec8796295af038e8573f1fe0f86f8f67946fcfd0db1d2a56f86e4dda3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	c2ba1f041ba50       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 seconds ago        Running             busybox                   0                   b17d7606007b6
	e76713fd6c03d       5185b96f0becf                                                                                         30 seconds ago       Running             coredns                   1                   f60fc43795c36
	5ce4fb65e4b85       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              41 seconds ago       Running             kindnet-cni               0                   06e2e420c3204
	883f8aa15acf4       6e38f40d628db                                                                                         43 seconds ago       Running             storage-provisioner       0                   53fa26659a4a9
	589074bbb37e7       5185b96f0becf                                                                                         43 seconds ago       Exited              coredns                   0                   2a7643eec8796
	5436fde5aabd2       46a6bb3c77ce0                                                                                         44 seconds ago       Running             kube-proxy                0                   a450caec62335
	459a8621d90b3       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   59322436f6077
	2e5771ae72b9c       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   bf54c4d8eb0c2
	cc1df7eeb82a3       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   3d852f0cc313d
	624942233c6b0       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   9d79d87a7a44a
	
	* 
	* ==> coredns [589074bbb37e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:39210 - 24476 "HINFO IN 3769487758298892341.1515843376402541889. udp 57 false 512" - - 0 5.000074219s
	[ERROR] plugin/errors: 2 3769487758298892341.1515843376402541889. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[INFO] 127.0.0.1:53486 - 52777 "HINFO IN 3769487758298892341.1515843376402541889. udp 57 false 512" - - 0 5.000080673s
	[ERROR] plugin/errors: 2 3769487758298892341.1515843376402541889. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [e76713fd6c03] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:56540 - 12605 "HINFO IN 7526566672075535390.5503095815732551325. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016087575s
	[INFO] 10.244.0.3:44285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176842s
	[INFO] 10.244.0.3:41450 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046848152s
	[INFO] 10.244.0.3:40310 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003908724s
	[INFO] 10.244.0.3:45364 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01223335s
	[INFO] 10.244.0.3:39652 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102391s
	[INFO] 10.244.0.3:45398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00484566s
	[INFO] 10.244.0.3:51203 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098429s
	[INFO] 10.244.0.3:40689 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141606s
	[INFO] 10.244.0.3:51539 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004171789s
	[INFO] 10.244.0.3:46748 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200533s
	[INFO] 10.244.0.3:32832 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129906s
	[INFO] 10.244.0.3:40264 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134742s
	[INFO] 10.244.0.3:43870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121786s
	[INFO] 10.244.0.3:36731 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076301s
	[INFO] 10.244.0.3:60911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067362s
	[INFO] 10.244.0.3:45359 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105212s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-384000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-384000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=multinode-384000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T17_04_27_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-384000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:05:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:04:57 +0000   Fri, 24 Feb 2023 01:04:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:04:57 +0000   Fri, 24 Feb 2023 01:04:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:04:57 +0000   Fri, 24 Feb 2023 01:04:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:04:57 +0000   Fri, 24 Feb 2023 01:04:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-384000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    57e18f70-d77e-4b45-ae15-597714d7865f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-vb76c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-787d4945fb-nlz4z                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     46s
	  kube-system                 etcd-multinode-384000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-n4mpj                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      46s
	  kube-system                 kube-apiserver-multinode-384000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-multinode-384000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-wmsxr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-multinode-384000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 44s                kube-proxy       
	  Normal  NodeHasSufficientMemory  68s (x4 over 68s)  kubelet          Node multinode-384000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x4 over 68s)  kubelet          Node multinode-384000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x3 over 68s)  kubelet          Node multinode-384000 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node multinode-384000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node multinode-384000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node multinode-384000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node multinode-384000 event: Registered Node multinode-384000 in Controller
	
	
	Name:               multinode-384000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-384000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:05:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-384000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-384000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    57e18f70-d77e-4b45-ae15-597714d7865f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-nlclz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-2g647               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14s
	  kube-system                 kube-proxy-q28gd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  Starting                 15s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15s (x2 over 15s)  kubelet          Node multinode-384000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s (x2 over 15s)  kubelet          Node multinode-384000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s (x2 over 15s)  kubelet          Node multinode-384000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14s                kubelet          Node multinode-384000-m02 status is now: NodeReady
	  Normal  RegisteredNode           12s                node-controller  Node multinode-384000-m02 event: Registered Node multinode-384000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000095] FS-Cache: O-key=[8] 'c95bc40400000000'
	[  +0.000057] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000081] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000007e1c140
	[  +0.000047] FS-Cache: N-key=[8] 'c95bc40400000000'
	[  +0.003060] FS-Cache: Duplicate cookie detected
	[  +0.000048] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000069] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=00000000934117af
	[  +0.000038] FS-Cache: O-key=[8] 'c95bc40400000000'
	[  +0.000044] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000058] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000099c779f3
	[  +0.000182] FS-Cache: N-key=[8] 'c95bc40400000000'
	[  +3.488321] FS-Cache: Duplicate cookie detected
	[  +0.000062] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000055] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=000000004b032eea
	[  +0.000065] FS-Cache: O-key=[8] 'c85bc40400000000'
	[  +0.000035] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000042] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000047d46db5
	[  +0.000055] FS-Cache: N-key=[8] 'c85bc40400000000'
	[  +0.398634] FS-Cache: Duplicate cookie detected
	[  +0.000091] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=000000004a75bbd2
	[  +0.000047] FS-Cache: O-key=[8] 'd35bc40400000000'
	[  +0.000054] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000051] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000062f74fb0
	[  +0.000064] FS-Cache: N-key=[8] 'd35bc40400000000'
	
	* 
	* ==> etcd [459a8621d90b] <==
	* {"level":"info","ts":"2023-02-24T01:04:21.479Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-24T01:04:21.479Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.275Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-384000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T01:04:22.275Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T01:05:01.901Z","caller":"traceutil/trace.go:171","msg":"trace[1455928292] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"185.969568ms","start":"2023-02-24T01:05:01.715Z","end":"2023-02-24T01:05:01.901Z","steps":["trace[1455928292] 'process raft request'  (duration: 185.861017ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:05:25 up  2:04,  0 users,  load average: 1.10, 1.09, 0.98
	Linux multinode-384000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [5ce4fb65e4b8] <==
	* I0224 01:04:44.150027       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0224 01:04:44.150136       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0224 01:04:44.150278       1 main.go:116] setting mtu 1500 for CNI 
	I0224 01:04:44.150425       1 main.go:146] kindnetd IP family: "ipv4"
	I0224 01:04:44.150467       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0224 01:04:44.850809       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:04:44.850892       1 main.go:227] handling current node
	I0224 01:04:54.864771       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:04:54.864817       1 main.go:227] handling current node
	I0224 01:05:04.876589       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:05:04.876634       1 main.go:227] handling current node
	I0224 01:05:14.880019       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:05:14.880056       1 main.go:227] handling current node
	I0224 01:05:14.880064       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 01:05:14.880068       1 main.go:250] Node multinode-384000-m02 has CIDR [10.244.1.0/24] 
	I0224 01:05:14.880165       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0224 01:05:24.891835       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:05:24.891899       1 main.go:227] handling current node
	I0224 01:05:24.891907       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 01:05:24.891914       1 main.go:250] Node multinode-384000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [cc1df7eeb82a] <==
	* I0224 01:04:23.413038       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 01:04:23.429505       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 01:04:23.429770       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 01:04:23.429785       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 01:04:23.429879       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 01:04:23.430017       1 cache.go:39] Caches are synced for autoregister controller
	I0224 01:04:23.430082       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 01:04:23.430322       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 01:04:23.430497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 01:04:24.153495       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 01:04:24.334068       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0224 01:04:24.337358       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0224 01:04:24.337394       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 01:04:24.954618       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 01:04:24.983382       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 01:04:25.071632       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0224 01:04:25.076084       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0224 01:04:25.076718       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 01:04:25.079989       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 01:04:25.360436       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 01:04:26.364333       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 01:04:26.372607       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0224 01:04:26.378726       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 01:04:39.549644       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 01:04:39.698191       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [2e5771ae72b9] <==
	* I0224 01:04:38.904438       1 shared_informer.go:280] Caches are synced for expand
	I0224 01:04:38.927086       1 shared_informer.go:280] Caches are synced for stateful set
	I0224 01:04:38.944817       1 shared_informer.go:280] Caches are synced for disruption
	I0224 01:04:38.945939       1 shared_informer.go:280] Caches are synced for attach detach
	I0224 01:04:38.951044       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 01:04:39.316625       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:04:39.371785       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:04:39.371824       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0224 01:04:39.553572       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0224 01:04:39.574914       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0224 01:04:39.759545       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wmsxr"
	I0224 01:04:39.759560       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n4mpj"
	I0224 01:04:39.859885       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-bvdps"
	I0224 01:04:39.865008       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-nlz4z"
	I0224 01:04:39.880539       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-bvdps"
	W0224 01:05:11.126611       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-384000-m02" does not exist
	I0224 01:05:11.130038       1 range_allocator.go:372] Set node multinode-384000-m02 PodCIDR to [10.244.1.0/24]
	I0224 01:05:11.133246       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2g647"
	I0224 01:05:11.133577       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q28gd"
	W0224 01:05:11.739697       1 topologycache.go:232] Can't get CPU or zone information for multinode-384000-m02 node
	W0224 01:05:13.799497       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-384000-m02. Assuming now as a timestamp.
	I0224 01:05:13.799689       1 event.go:294] "Event occurred" object="multinode-384000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-384000-m02 event: Registered Node multinode-384000-m02 in Controller"
	I0224 01:05:18.590187       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0224 01:05:18.597761       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-nlclz"
	I0224 01:05:18.625598       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-vb76c"
	
	* 
	* ==> kube-proxy [5436fde5aabd] <==
	* I0224 01:04:40.782386       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0224 01:04:40.782470       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0224 01:04:40.782523       1 server_others.go:535] "Using iptables proxy"
	I0224 01:04:40.861005       1 server_others.go:176] "Using iptables Proxier"
	I0224 01:04:40.861028       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0224 01:04:40.861033       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0224 01:04:40.861048       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0224 01:04:40.861070       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 01:04:40.861404       1 server.go:655] "Version info" version="v1.26.1"
	I0224 01:04:40.861415       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:04:40.862241       1 config.go:226] "Starting endpoint slice config controller"
	I0224 01:04:40.862257       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 01:04:40.862272       1 config.go:317] "Starting service config controller"
	I0224 01:04:40.862276       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 01:04:40.862327       1 config.go:444] "Starting node config controller"
	I0224 01:04:40.862336       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 01:04:40.963446       1 shared_informer.go:280] Caches are synced for service config
	I0224 01:04:40.963446       1 shared_informer.go:280] Caches are synced for node config
	I0224 01:04:40.963460       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [624942233c6b] <==
	* W0224 01:04:23.370709       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 01:04:23.370837       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0224 01:04:24.271302       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.271362       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.346347       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 01:04:24.346368       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 01:04:24.469128       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0224 01:04:24.469174       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 01:04:24.497758       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0224 01:04:24.497816       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0224 01:04:24.511111       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 01:04:24.511150       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 01:04:24.542675       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0224 01:04:24.542773       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0224 01:04:24.543275       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0224 01:04:24.543371       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0224 01:04:24.651473       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.651639       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.728069       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.728117       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.773574       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.773622       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.781643       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 01:04:24.781722       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0224 01:04:26.565140       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 01:04:08 UTC, end at Fri 2023-02-24 01:05:26 UTC. --
	Feb 24 01:04:42 multinode-384000 kubelet[2244]: I0224 01:04:42.178563    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-bvdps" podStartSLOduration=3.178535392 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:41.778935882 +0000 UTC m=+15.429274901" watchObservedRunningTime="2023-02-24 01:04:42.178535392 +0000 UTC m=+15.828874406"
	Feb 24 01:04:42 multinode-384000 kubelet[2244]: I0224 01:04:42.576503    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wmsxr" podStartSLOduration=3.57647565 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:42.178760745 +0000 UTC m=+15.829099755" watchObservedRunningTime="2023-02-24 01:04:42.57647565 +0000 UTC m=+16.226814665"
	Feb 24 01:04:42 multinode-384000 kubelet[2244]: I0224 01:04:42.576726    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.576711978 pod.CreationTimestamp="2023-02-24 01:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:42.576541052 +0000 UTC m=+16.226880071" watchObservedRunningTime="2023-02-24 01:04:42.576711978 +0000 UTC m=+16.227050995"
	Feb 24 01:04:44 multinode-384000 kubelet[2244]: I0224 01:04:44.270040    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-nlz4z" podStartSLOduration=5.27001081 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:42.980196679 +0000 UTC m=+16.630535702" watchObservedRunningTime="2023-02-24 01:04:44.27001081 +0000 UTC m=+17.920349830"
	Feb 24 01:04:47 multinode-384000 kubelet[2244]: I0224 01:04:47.301450    2244 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 01:04:47 multinode-384000 kubelet[2244]: I0224 01:04:47.350191    2244 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.299113    2244 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spt2b\" (UniqueName: \"kubernetes.io/projected/5e0ff7c6-6e83-42f4-bcd9-47d435925027-kube-api-access-spt2b\") pod \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\" (UID: \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\") "
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.299184    2244 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0ff7c6-6e83-42f4-bcd9-47d435925027-config-volume\") pod \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\" (UID: \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\") "
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: W0224 01:04:54.299318    2244 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5e0ff7c6-6e83-42f4-bcd9-47d435925027/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.299431    2244 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0ff7c6-6e83-42f4-bcd9-47d435925027-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e0ff7c6-6e83-42f4-bcd9-47d435925027" (UID: "5e0ff7c6-6e83-42f4-bcd9-47d435925027"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.300811    2244 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0ff7c6-6e83-42f4-bcd9-47d435925027-kube-api-access-spt2b" (OuterVolumeSpecName: "kube-api-access-spt2b") pod "5e0ff7c6-6e83-42f4-bcd9-47d435925027" (UID: "5e0ff7c6-6e83-42f4-bcd9-47d435925027"). InnerVolumeSpecName "kube-api-access-spt2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.361015    2244 scope.go:115] "RemoveContainer" containerID="e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.369479    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-n4mpj" podStartSLOduration=-9.223372021485348e+09 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="2023-02-24 01:04:41.157383491 +0000 UTC m=+14.807722502" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:44.27028833 +0000 UTC m=+17.920627340" watchObservedRunningTime="2023-02-24 01:04:54.369427954 +0000 UTC m=+28.020139251"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.372144    2244 scope.go:115] "RemoveContainer" containerID="e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: E0224 01:04:54.372923    2244 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad" containerID="e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.372957    2244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad} err="failed to get container status \"e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad\": rpc error: code = Unknown desc = Error: No such container: e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.400182    2244 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-spt2b\" (UniqueName: \"kubernetes.io/projected/5e0ff7c6-6e83-42f4-bcd9-47d435925027-kube-api-access-spt2b\") on node \"multinode-384000\" DevicePath \"\""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.400230    2244 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0ff7c6-6e83-42f4-bcd9-47d435925027-config-volume\") on node \"multinode-384000\" DevicePath \"\""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.573477    2244 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5e0ff7c6-6e83-42f4-bcd9-47d435925027 path="/var/lib/kubelet/pods/5e0ff7c6-6e83-42f4-bcd9-47d435925027/volumes"
	Feb 24 01:04:55 multinode-384000 kubelet[2244]: I0224 01:04:55.377637    2244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7643eec8796295af038e8573f1fe0f86f8f67946fcfd0db1d2a56f86e4dda3"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: I0224 01:05:18.629059    2244 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: E0224 01:05:18.629106    2244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e0ff7c6-6e83-42f4-bcd9-47d435925027" containerName="coredns"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: I0224 01:05:18.629135    2244 memory_manager.go:346] "RemoveStaleState removing state" podUID="5e0ff7c6-6e83-42f4-bcd9-47d435925027" containerName="coredns"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: I0224 01:05:18.763300    2244 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrn7x\" (UniqueName: \"kubernetes.io/projected/1a4a3aef-ff8d-45d3-9b2b-c661e7ee02af-kube-api-access-rrn7x\") pod \"busybox-6b86dd6d48-vb76c\" (UID: \"1a4a3aef-ff8d-45d3-9b2b-c661e7ee02af\") " pod="default/busybox-6b86dd6d48-vb76c"
	Feb 24 01:05:21 multinode-384000 kubelet[2244]: I0224 01:05:21.548230    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-vb76c" podStartSLOduration=-9.223372033306572e+09 pod.CreationTimestamp="2023-02-24 01:05:18 +0000 UTC" firstStartedPulling="2023-02-24 01:05:19.229822209 +0000 UTC m=+52.880533502" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:05:21.54804816 +0000 UTC m=+55.198759460" watchObservedRunningTime="2023-02-24 01:05:21.54820407 +0000 UTC m=+55.198915371"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-384000 -n multinode-384000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-384000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (8.48s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

                                                
                                                
** /stderr **
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- sh -c "ping -c 1 <nil>"
multinode_test.go:558: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-nlclz -- sh -c "ping -c 1 <nil>": exit status 2 (155.966799ms)

                                                
                                                
** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

                                                
                                                
** /stderr **
multinode_test.go:559: Failed to ping host (<nil>) from pod (busybox-6b86dd6d48-nlclz): exit status 2
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-vb76c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-384000 -- exec busybox-6b86dd6d48-vb76c -- sh -c "ping -c 1 192.168.65.2"
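
The ping failure above follows directly from the first exec: inside the pod, nslookup cannot resolve host.minikube.internal, so the pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" prints nothing and the test formats a nil/empty host IP into the ping command. The resulting sh -c "ping -c 1 <nil>" is parsed by the pod's shell as redirections (input from a file named "nil", then a trailing ">" with no filename), which produces the "syntax error: unexpected end of file" shown in the stderr block. The Go program below is a hypothetical reconstruction of that sequence, not the actual multinode_test.go helper; the context name, pod name, and shell pipeline are copied from the log, and everything else (function names, error handling) is illustrative.

// Hypothetical reconstruction of the failing sequence (not the real test code):
// resolve host.minikube.internal from inside a pod, then ping whatever came back.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectlExec runs a shell command inside the given pod via kubectl exec.
// The context and pod names are taken from the log; the helper is illustrative only.
func kubectlExec(pod, script string) (string, error) {
	out, err := exec.Command("kubectl", "--context", "multinode-384000",
		"exec", pod, "--", "sh", "-c", script).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pod := "busybox-6b86dd6d48-nlclz"

	// Same pipeline as the test: when resolution succeeds, line 5, field 3
	// of the nslookup output is the resolved host address.
	hostIP, err := kubectlExec(pod,
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	if err != nil || hostIP == "" {
		// This is the branch hit in the failure above: resolution failed,
		// so the ping command below is built from an empty value.
		fmt.Printf("host ip lookup failed: %q (%v)\n", hostIP, err)
	}

	// Formatting an empty (or nil, printed as "<nil>") value yields a ping
	// command that sh -c cannot parse, reproducing the exit status 2 above.
	pingCmd := fmt.Sprintf("ping -c 1 %s", hostIP)
	if _, err := kubectlExec(pod, pingCmd); err != nil {
		fmt.Printf("ping %q failed: %v\n", hostIP, err)
	}
}
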
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-384000
helpers_test.go:235: (dbg) docker inspect multinode-384000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f",
	        "Created": "2023-02-24T01:04:08.197937871Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477589,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:04:08.49208617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/hostname",
	        "HostsPath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/hosts",
	        "LogPath": "/var/lib/docker/containers/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f/434de70f5e1df1ce8a47bf9ccff12621064020e4526c72fcfeaac1c99ac1ca8f-json.log",
	        "Name": "/multinode-384000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-384000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-384000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ca9e8bd08e5423cbe44bf442faf8eb440251ee91cdfc988e0be7ca4657e0aae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-384000",
	                "Source": "/var/lib/docker/volumes/multinode-384000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-384000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-384000",
	                "name.minikube.sigs.k8s.io": "multinode-384000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "642c5d802f1b22b0b186cad96b3c3fed8b1a2ff3eb4af30f670a680efb442204",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58127"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58128"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58131"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/642c5d802f1b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-384000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "434de70f5e1d",
	                        "multinode-384000"
	                    ],
	                    "NetworkID": "0f1b3c4ce23f6eca8d55fa599b0450c172159cdf542356f741631ebb970a9e73",
	                    "EndpointID": "941bad6466f325e3d767afe0a34b3a34138d06b2d8d09ede556fb2aa2d4fa5d1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
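For reference, the container state dumped above can be re-queried after a failure with docker container inspect multinode-384000. Below is a minimal Go sketch (hypothetical helper, not part of helpers_test.go; the container name is taken from this run) that extracts the published host-port mappings shown in the Ports block, e.g. 22/tcp -> 127.0.0.1:58127:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspect mirrors only the fields of the `docker container inspect` JSON
	// that are read below: NetworkSettings.Ports.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// Query the container state; the dump above appears to be this command's output.
		out, err := exec.Command("docker", "container", "inspect", "multinode-384000").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no container found")
		}
		// Prints lines such as "22/tcp -> 127.0.0.1:58127", matching the Ports block above.
		for port, bindings := range containers[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
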
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-384000 -n multinode-384000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-384000 logs -n 25: (2.402241469s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-502000 ssh -- ls                    | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-491000                           | mount-start-1-491000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-502000 ssh -- ls                    | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| start   | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| ssh     | mount-start-2-502000 ssh -- ls                    | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-502000                           | mount-start-2-502000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| delete  | -p mount-start-1-491000                           | mount-start-1-491000 | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:03 PST |
	| start   | -p multinode-384000                               | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:03 PST | 23 Feb 23 17:05 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- apply -f                   | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- rollout                    | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- get pods -o                | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- get pods -o                | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST |                     |
	|         | busybox-6b86dd6d48-nlclz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST |                     |
	|         | busybox-6b86dd6d48-nlclz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST |                     |
	|         | busybox-6b86dd6d48-nlclz -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- get pods -o                | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-nlclz                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST |                     |
	|         | busybox-6b86dd6d48-nlclz -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 <nil>                                |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-384000 -- exec                       | multinode-384000     | jenkins | v1.29.0 | 23 Feb 23 17:05 PST | 23 Feb 23 17:05 PST |
	|         | busybox-6b86dd6d48-vb76c -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.65.2                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 17:03:59
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 17:03:59.656062   30321 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:03:59.656244   30321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:03:59.656249   30321 out.go:309] Setting ErrFile to fd 2...
	I0223 17:03:59.656253   30321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:03:59.656363   30321 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:03:59.657822   30321 out.go:303] Setting JSON to false
	I0223 17:03:59.677236   30321 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7414,"bootTime":1677193225,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:03:59.677314   30321 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:03:59.699081   30321 out.go:177] * [multinode-384000] minikube v1.29.0 on Darwin 13.2
	I0223 17:03:59.720119   30321 notify.go:220] Checking for updates...
	I0223 17:03:59.742107   30321 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:03:59.763875   30321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:03:59.806984   30321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:03:59.827952   30321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:03:59.849179   30321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:03:59.892980   30321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:03:59.914339   30321 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:03:59.979719   30321 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:03:59.979843   30321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:04:00.126028   30321 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 01:04:00.031285978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:04:00.169633   30321 out.go:177] * Using the docker driver based on user configuration
	I0223 17:04:00.191172   30321 start.go:296] selected driver: docker
	I0223 17:04:00.191198   30321 start.go:857] validating driver "docker" against <nil>
	I0223 17:04:00.191216   30321 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:04:00.195103   30321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:04:00.339690   30321 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 01:04:00.245255337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:04:00.339817   30321 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 17:04:00.339997   30321 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 17:04:00.362872   30321 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 17:04:00.384274   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:04:00.384302   30321 cni.go:136] 0 nodes found, recommending kindnet
	I0223 17:04:00.384314   30321 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 17:04:00.384339   30321 start_flags.go:319] config:
	{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:04:00.427856   30321 out.go:177] * Starting control plane node multinode-384000 in cluster multinode-384000
	I0223 17:04:00.449100   30321 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:04:00.469974   30321 out.go:177] * Pulling base image ...
	I0223 17:04:00.512032   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:00.512062   30321 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:04:00.512131   30321 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 17:04:00.512153   30321 cache.go:57] Caching tarball of preloaded images
	I0223 17:04:00.512364   30321 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:04:00.512382   30321 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 17:04:00.514626   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:00.514674   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json: {Name:mk35965080677d4155364ecaf1133902c945959b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:00.569017   30321 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:04:00.569047   30321 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:04:00.569068   30321 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:04:00.569117   30321 start.go:364] acquiring machines lock for multinode-384000: {Name:mk710a8f130795841106a8d589daddf1c49570ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:04:00.569263   30321 start.go:368] acquired machines lock for "multinode-384000" in 134.003µs
	I0223 17:04:00.569292   30321 start.go:93] Provisioning new machine with config: &{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 17:04:00.569374   30321 start.go:125] createHost starting for "" (driver="docker")
	I0223 17:04:00.612785   30321 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 17:04:00.613202   30321 start.go:159] libmachine.API.Create for "multinode-384000" (driver="docker")
	I0223 17:04:00.613245   30321 client.go:168] LocalClient.Create starting
	I0223 17:04:00.613418   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 17:04:00.613500   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:00.613531   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:00.613646   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 17:04:00.613708   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:00.613725   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:00.614590   30321 cli_runner.go:164] Run: docker network inspect multinode-384000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 17:04:00.669024   30321 cli_runner.go:211] docker network inspect multinode-384000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 17:04:00.669115   30321 network_create.go:281] running [docker network inspect multinode-384000] to gather additional debugging logs...
	I0223 17:04:00.669129   30321 cli_runner.go:164] Run: docker network inspect multinode-384000
	W0223 17:04:00.724226   30321 cli_runner.go:211] docker network inspect multinode-384000 returned with exit code 1
	I0223 17:04:00.724253   30321 network_create.go:284] error running [docker network inspect multinode-384000]: docker network inspect multinode-384000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-384000
	I0223 17:04:00.724265   30321 network_create.go:286] output of [docker network inspect multinode-384000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-384000
	
	** /stderr **
	I0223 17:04:00.724343   30321 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 17:04:00.779800   30321 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:04:00.780129   30321 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d78b20}
	I0223 17:04:00.780143   30321 network_create.go:123] attempt to create docker network multinode-384000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 17:04:00.780222   30321 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-384000 multinode-384000
	I0223 17:04:00.868500   30321 network_create.go:107] docker network multinode-384000 192.168.58.0/24 created
	I0223 17:04:00.868530   30321 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-384000" container
	I0223 17:04:00.868639   30321 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 17:04:00.923921   30321 cli_runner.go:164] Run: docker volume create multinode-384000 --label name.minikube.sigs.k8s.io=multinode-384000 --label created_by.minikube.sigs.k8s.io=true
	I0223 17:04:00.978302   30321 oci.go:103] Successfully created a docker volume multinode-384000
	I0223 17:04:00.978432   30321 cli_runner.go:164] Run: docker run --rm --name multinode-384000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000 --entrypoint /usr/bin/test -v multinode-384000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 17:04:01.409854   30321 oci.go:107] Successfully prepared a docker volume multinode-384000
	I0223 17:04:01.409887   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:01.409901   30321 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 17:04:01.410012   30321 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 17:04:08.003538   30321 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.593535098s)
	I0223 17:04:08.003559   30321 kic.go:199] duration metric: took 6.593731 seconds to extract preloaded images to volume
	I0223 17:04:08.003681   30321 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 17:04:08.144725   30321 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-384000 --name multinode-384000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-384000 --network multinode-384000 --ip 192.168.58.2 --volume multinode-384000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 17:04:08.499616   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Running}}
	I0223 17:04:08.563566   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:08.631703   30321 cli_runner.go:164] Run: docker exec multinode-384000 stat /var/lib/dpkg/alternatives/iptables
	I0223 17:04:08.753389   30321 oci.go:144] the created container "multinode-384000" has a running status.
	I0223 17:04:08.753436   30321 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa...
	I0223 17:04:08.897712   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 17:04:08.897783   30321 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 17:04:09.003359   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:09.059915   30321 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 17:04:09.059934   30321 kic_runner.go:114] Args: [docker exec --privileged multinode-384000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 17:04:09.168252   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:09.225487   30321 machine.go:88] provisioning docker machine ...
	I0223 17:04:09.225538   30321 ubuntu.go:169] provisioning hostname "multinode-384000"
	I0223 17:04:09.225642   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:09.282374   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:09.282759   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:09.282774   30321 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-384000 && echo "multinode-384000" | sudo tee /etc/hostname
	I0223 17:04:09.425849   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-384000
	
	I0223 17:04:09.425930   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:09.548448   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:09.548876   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:09.548889   30321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-384000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-384000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-384000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:04:09.684703   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:04:09.684733   30321 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:04:09.684752   30321 ubuntu.go:177] setting up certificates
	I0223 17:04:09.684759   30321 provision.go:83] configureAuth start
	I0223 17:04:09.684846   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000
	I0223 17:04:09.742376   30321 provision.go:138] copyHostCerts
	I0223 17:04:09.742423   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:04:09.742480   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:04:09.742490   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:04:09.742643   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:04:09.742822   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:04:09.742854   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:04:09.742859   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:04:09.742935   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:04:09.743059   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:04:09.743095   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:04:09.743100   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:04:09.743167   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:04:09.743309   30321 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.multinode-384000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-384000]
	I0223 17:04:09.867609   30321 provision.go:172] copyRemoteCerts
	I0223 17:04:09.867667   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:04:09.867718   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:09.925349   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:10.020512   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 17:04:10.020606   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:04:10.037904   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 17:04:10.037986   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 17:04:10.055088   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 17:04:10.055168   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:04:10.072578   30321 provision.go:86] duration metric: configureAuth took 387.811375ms
	I0223 17:04:10.072594   30321 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:04:10.072760   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:04:10.072828   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:10.132164   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:10.132535   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:10.132548   30321 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:04:10.268194   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:04:10.268210   30321 ubuntu.go:71] root file system type: overlay
	I0223 17:04:10.268316   30321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:04:10.268391   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:10.325166   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:10.325518   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:10.325567   30321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:04:10.467391   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:04:10.467471   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:10.524963   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:04:10.525297   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58127 <nil> <nil>}
	I0223 17:04:10.525311   30321 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:04:11.140475   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:04:10.465300927 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 17:04:11.140502   30321 machine.go:91] provisioned docker machine in 1.91500784s
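
For reference, the unit swap above follows an idempotent pattern: diff the freshly rendered docker.service against the installed one, and only move it into place and restart Docker when they differ. A minimal Go sketch of that pattern (the run helper and hard-coded paths are illustrative, not minikube's actual provisioner code):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and logs its combined output on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v: %v\n%s", name, args, err, out)
	}
	return err
}

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = cur + ".new"

	// diff exits non-zero when the files differ (or the old one is missing);
	// only then install the new unit and restart docker, as in the shell one-liner.
	if err := run("sudo", "diff", "-u", cur, next); err != nil {
		if err := run("sudo", "mv", next, cur); err != nil {
			log.Fatal(err)
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if err := run("sudo", args...); err != nil {
				log.Fatal(err)
			}
		}
	}
}
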
	I0223 17:04:11.140511   30321 client.go:171] LocalClient.Create took 10.527374153s
	I0223 17:04:11.140526   30321 start.go:167] duration metric: libmachine.API.Create for "multinode-384000" took 10.52744427s
	I0223 17:04:11.140539   30321 start.go:300] post-start starting for "multinode-384000" (driver="docker")
	I0223 17:04:11.140553   30321 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:04:11.140629   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:04:11.140685   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.198979   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.293728   30321 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:04:11.297245   30321 command_runner.go:130] > NAME="Ubuntu"
	I0223 17:04:11.297253   30321 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 17:04:11.297257   30321 command_runner.go:130] > ID=ubuntu
	I0223 17:04:11.297260   30321 command_runner.go:130] > ID_LIKE=debian
	I0223 17:04:11.297264   30321 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 17:04:11.297268   30321 command_runner.go:130] > VERSION_ID="20.04"
	I0223 17:04:11.297274   30321 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 17:04:11.297279   30321 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 17:04:11.297283   30321 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 17:04:11.297298   30321 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 17:04:11.297302   30321 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 17:04:11.297306   30321 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 17:04:11.297364   30321 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:04:11.297374   30321 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:04:11.297381   30321 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:04:11.297386   30321 info.go:137] Remote host: Ubuntu 20.04.5 LTS
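
The NAME/VERSION/ID lines above come from /etc/os-release, which is plain KEY=value data; the "Couldn't set key" warnings are simply keys with no matching struct field. A small Go sketch of parsing the file into a map (quoting rules simplified, illustrative only):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`) // drop surrounding quotes
	}
	fmt.Println("Remote host:", info["PRETTY_NAME"])
}
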
	I0223 17:04:11.297396   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:04:11.297493   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:04:11.297665   30321 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:04:11.297677   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /etc/ssl/certs/248852.pem
	I0223 17:04:11.297860   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:04:11.305048   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:04:11.322835   30321 start.go:303] post-start completed in 182.282994ms
	I0223 17:04:11.323356   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000
	I0223 17:04:11.382358   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:11.382804   30321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:04:11.382865   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.440328   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.532728   30321 command_runner.go:130] > 5%
	I0223 17:04:11.532809   30321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:04:11.537424   30321 command_runner.go:130] > 93G
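
The two df probes above read the used percentage and the free space of /var by taking a single column from the second output line. A hedged Go sketch of the same parsing (column indexes assume the usual df layout; a real implementation could use a statfs syscall instead):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dfField runs df with the given args and returns one whitespace-separated
// field from the second line of output (the line describing the filesystem).
func dfField(args []string, field int) (string, error) {
	out, err := exec.Command("df", args...).Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	cols := strings.Fields(lines[1])
	if field >= len(cols) {
		return "", fmt.Errorf("no field %d in %q", field, lines[1])
	}
	return cols[field], nil
}

func main() {
	usedPct, _ := dfField([]string{"-h", "/var"}, 4)  // Use%, e.g. "5%"
	availGB, _ := dfField([]string{"-BG", "/var"}, 3) // Available, e.g. "93G"
	fmt.Println("used:", usedPct, "available:", availGB)
}
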
	I0223 17:04:11.537436   30321 start.go:128] duration metric: createHost completed in 10.968179606s
	I0223 17:04:11.537447   30321 start.go:83] releasing machines lock for "multinode-384000", held for 10.968298967s
	I0223 17:04:11.537525   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000
	I0223 17:04:11.594419   30321 ssh_runner.go:195] Run: cat /version.json
	I0223 17:04:11.594443   30321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 17:04:11.594501   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.594523   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:11.657132   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.657162   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:11.750436   30321 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 17:04:11.803827   30321 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 17:04:11.805802   30321 ssh_runner.go:195] Run: systemctl --version
	I0223 17:04:11.810782   30321 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 17:04:11.810804   30321 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 17:04:11.810894   30321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:04:11.815548   30321 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 17:04:11.815559   30321 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 17:04:11.815571   30321 command_runner.go:130] > Device: a6h/166d	Inode: 2885211     Links: 1
	I0223 17:04:11.815577   30321 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:04:11.815585   30321 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:04:11.815594   30321 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:04:11.815600   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.964225417 +0000
	I0223 17:04:11.815607   30321 command_runner.go:130] >  Birth: -
	I0223 17:04:11.815859   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:04:11.836126   30321 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:04:11.836196   30321 ssh_runner.go:195] Run: which cri-dockerd
	I0223 17:04:11.839967   30321 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 17:04:11.840207   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 17:04:11.847579   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 17:04:11.860245   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 17:04:11.874957   30321 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 17:04:11.874986   30321 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 17:04:11.874997   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:04:11.875008   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:04:11.875085   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:04:11.887495   30321 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:04:11.887507   30321 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:04:11.888331   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 17:04:11.896733   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:04:11.905073   30321 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:04:11.905130   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:04:11.913621   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:04:11.922108   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:04:11.930637   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:04:11.938936   30321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:04:11.946966   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:04:11.955500   30321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:04:11.962229   30321 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 17:04:11.962816   30321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:04:11.970192   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:04:12.035519   30321 ssh_runner.go:195] Run: sudo systemctl restart containerd
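
The preceding sed calls rewrite /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the cgroupfs driver, and point conf_dir at /etc/cni/net.d. An equivalent sketch in Go using regexps (illustrative only; the provisioner itself shells out to sed as logged above):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Each edit mirrors one of the sed -r expressions in the log.
	edits := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, e := range edits {
		data = regexp.MustCompile(e.re).ReplaceAll(data, []byte(e.repl))
	}
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}
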
	I0223 17:04:12.107992   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:04:12.108010   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:04:12.108082   30321 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:04:12.118212   30321 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 17:04:12.118339   30321 command_runner.go:130] > [Unit]
	I0223 17:04:12.118351   30321 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 17:04:12.118361   30321 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 17:04:12.118373   30321 command_runner.go:130] > BindsTo=containerd.service
	I0223 17:04:12.118381   30321 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 17:04:12.118386   30321 command_runner.go:130] > Wants=network-online.target
	I0223 17:04:12.118391   30321 command_runner.go:130] > Requires=docker.socket
	I0223 17:04:12.118395   30321 command_runner.go:130] > StartLimitBurst=3
	I0223 17:04:12.118399   30321 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 17:04:12.118403   30321 command_runner.go:130] > [Service]
	I0223 17:04:12.118406   30321 command_runner.go:130] > Type=notify
	I0223 17:04:12.118410   30321 command_runner.go:130] > Restart=on-failure
	I0223 17:04:12.118417   30321 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 17:04:12.118430   30321 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 17:04:12.118438   30321 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 17:04:12.118445   30321 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 17:04:12.118456   30321 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 17:04:12.118465   30321 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 17:04:12.118474   30321 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 17:04:12.118488   30321 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 17:04:12.118497   30321 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 17:04:12.118503   30321 command_runner.go:130] > ExecStart=
	I0223 17:04:12.118522   30321 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 17:04:12.118533   30321 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 17:04:12.118542   30321 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 17:04:12.118548   30321 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 17:04:12.118553   30321 command_runner.go:130] > LimitNOFILE=infinity
	I0223 17:04:12.118558   30321 command_runner.go:130] > LimitNPROC=infinity
	I0223 17:04:12.118563   30321 command_runner.go:130] > LimitCORE=infinity
	I0223 17:04:12.118570   30321 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 17:04:12.118577   30321 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 17:04:12.118583   30321 command_runner.go:130] > TasksMax=infinity
	I0223 17:04:12.118588   30321 command_runner.go:130] > TimeoutStartSec=0
	I0223 17:04:12.118597   30321 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 17:04:12.118608   30321 command_runner.go:130] > Delegate=yes
	I0223 17:04:12.118616   30321 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 17:04:12.118629   30321 command_runner.go:130] > KillMode=process
	I0223 17:04:12.118642   30321 command_runner.go:130] > [Install]
	I0223 17:04:12.118648   30321 command_runner.go:130] > WantedBy=multi-user.target
	I0223 17:04:12.119210   30321 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:04:12.119274   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:04:12.131181   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:04:12.144963   30321 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:04:12.144977   30321 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:04:12.145841   30321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:04:12.253094   30321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:04:12.320407   30321 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:04:12.320430   30321 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:04:12.361491   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:04:12.439781   30321 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:04:12.673580   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:04:12.743275   30321 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 17:04:12.743401   30321 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 17:04:12.823552   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:04:12.893863   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:04:12.961594   30321 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 17:04:12.981225   30321 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 17:04:12.981312   30321 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 17:04:12.985616   30321 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 17:04:12.985626   30321 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 17:04:12.985631   30321 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0223 17:04:12.985640   30321 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 17:04:12.985647   30321 command_runner.go:130] > Access: 2023-02-24 01:04:12.969300763 +0000
	I0223 17:04:12.985652   30321 command_runner.go:130] > Modify: 2023-02-24 01:04:12.969300763 +0000
	I0223 17:04:12.985656   30321 command_runner.go:130] > Change: 2023-02-24 01:04:12.978300762 +0000
	I0223 17:04:12.985660   30321 command_runner.go:130] >  Birth: -
	I0223 17:04:12.985674   30321 start.go:553] Will wait 60s for crictl version
	I0223 17:04:12.985719   30321 ssh_runner.go:195] Run: which crictl
	I0223 17:04:12.989681   30321 command_runner.go:130] > /usr/bin/crictl
	I0223 17:04:12.989738   30321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 17:04:13.088268   30321 command_runner.go:130] > Version:  0.1.0
	I0223 17:04:13.088281   30321 command_runner.go:130] > RuntimeName:  docker
	I0223 17:04:13.088285   30321 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 17:04:13.088289   30321 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 17:04:13.090557   30321 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 17:04:13.090640   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:04:13.114450   30321 command_runner.go:130] > 23.0.1
	I0223 17:04:13.116236   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:04:13.139645   30321 command_runner.go:130] > 23.0.1
	I0223 17:04:13.185785   30321 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 17:04:13.185936   30321 cli_runner.go:164] Run: docker exec -t multinode-384000 dig +short host.docker.internal
	I0223 17:04:13.298095   30321 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:04:13.298210   30321 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:04:13.302960   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:04:13.313024   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:13.370101   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:13.370177   30321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:04:13.388852   30321 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 17:04:13.388872   30321 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 17:04:13.388876   30321 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 17:04:13.388882   30321 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 17:04:13.388887   30321 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 17:04:13.388892   30321 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 17:04:13.388897   30321 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 17:04:13.388904   30321 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 17:04:13.390338   30321 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 17:04:13.390351   30321 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:04:13.390440   30321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:04:13.408340   30321 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 17:04:13.408353   30321 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 17:04:13.408357   30321 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 17:04:13.408366   30321 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 17:04:13.408373   30321 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 17:04:13.408385   30321 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 17:04:13.408403   30321 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 17:04:13.408411   30321 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 17:04:13.409739   30321 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 17:04:13.409753   30321 cache_images.go:84] Images are preloaded, skipping loading
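
The preload check above simply compares the output of docker images --format {{.Repository}}:{{.Tag}} against the images this Kubernetes version needs, and skips extraction when all are present. A small Go sketch of that decision (the expected list is copied from the log; the helper structure is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.26.1",
		"registry.k8s.io/kube-controller-manager:v1.26.1",
		"registry.k8s.io/kube-scheduler:v1.26.1",
		"registry.k8s.io/kube-proxy:v1.26.1",
		"registry.k8s.io/etcd:3.5.6-0",
		"registry.k8s.io/pause:3.9",
		"registry.k8s.io/coredns/coredns:v1.9.3",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing:", img, "-- would extract the preload tarball")
			return
		}
	}
	fmt.Println("Images already preloaded, skipping extraction")
}
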
	I0223 17:04:13.409855   30321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:04:13.434482   30321 command_runner.go:130] > cgroupfs
	I0223 17:04:13.436200   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:04:13.436212   30321 cni.go:136] 1 nodes found, recommending kindnet
	I0223 17:04:13.436228   30321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:04:13.436245   30321 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-384000 NodeName:multinode-384000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:04:13.436376   30321 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-384000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
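
The kubeadm config above is rendered from the option set logged at kubeadm.go:172. As an illustration of how such a file can be produced, here is a hedged Go sketch that renders only the InitConfiguration fragment with text/template (the struct and field names are invented for the example, not minikube's own types):

package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	cfg := initCfg{
		AdvertiseAddress: "192.168.58.2",
		APIServerPort:    8443,
		CRISocket:        "/var/run/cri-dockerd.sock",
		NodeName:         "multinode-384000",
		NodeIP:           "192.168.58.2",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, &cfg); err != nil {
		panic(err)
	}
}
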
	
	I0223 17:04:13.436452   30321 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-384000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
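
The kubelet ExecStart line above is just the v1.26.1 kubelet binary plus a sorted set of --flag=value pairs derived from the node config. A minimal Go sketch that reassembles the same command line (values copied from the log; the assembly code is illustrative):

package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	flags := map[string]string{
		"bootstrap-kubeconfig":       "/etc/kubernetes/bootstrap-kubelet.conf",
		"config":                     "/var/lib/kubelet/config.yaml",
		"container-runtime":          "remote",
		"container-runtime-endpoint": "/var/run/cri-dockerd.sock",
		"hostname-override":          "multinode-384000",
		"image-service-endpoint":     "/var/run/cri-dockerd.sock",
		"kubeconfig":                 "/etc/kubernetes/kubelet.conf",
		"node-ip":                    "192.168.58.2",
	}
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // flags are emitted in alphabetical order, as in the log
	parts := []string{"/var/lib/minikube/binaries/v1.26.1/kubelet"}
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	fmt.Println("ExecStart=" + strings.Join(parts, " "))
}
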
	I0223 17:04:13.436528   30321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 17:04:13.443959   30321 command_runner.go:130] > kubeadm
	I0223 17:04:13.443967   30321 command_runner.go:130] > kubectl
	I0223 17:04:13.443971   30321 command_runner.go:130] > kubelet
	I0223 17:04:13.444627   30321 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:04:13.444682   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:04:13.452079   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 17:04:13.464837   30321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:04:13.478352   30321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 17:04:13.493288   30321 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:04:13.497118   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
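
The /etc/hosts edit above is idempotent: any existing line ending in the control-plane.minikube.internal name is dropped before the fresh entry is appended. A Go sketch of the same behaviour (illustrative; the provisioner itself runs the bash one-liner shown):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.58.2\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
		panic(err)
	}
}
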
	I0223 17:04:13.506890   30321 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000 for IP: 192.168.58.2
	I0223 17:04:13.506907   30321 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.507084   30321 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:04:13.507153   30321 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:04:13.507202   30321 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key
	I0223 17:04:13.507218   30321 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt with IP's: []
	I0223 17:04:13.627945   30321 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt ...
	I0223 17:04:13.627961   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt: {Name:mkd359862379ab1055d74401ef8de9196a9ae6b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.628235   30321 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key ...
	I0223 17:04:13.628243   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key: {Name:mkf7637f021e05129181fedc91db0006be87932e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.628430   30321 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041
	I0223 17:04:13.628445   30321 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 17:04:13.859191   30321 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041 ...
	I0223 17:04:13.859204   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041: {Name:mkc78a39ff6b63467a6908b8cbc3acb08372be96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.859458   30321 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041 ...
	I0223 17:04:13.859467   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041: {Name:mka676fb69891e26111a30d7dfc27b7bc2bb5bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.859653   30321 certs.go:333] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt
	I0223 17:04:13.859802   30321 certs.go:337] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key
	I0223 17:04:13.860385   30321 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key
	I0223 17:04:13.860470   30321 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt with IP's: []
	I0223 17:04:13.917792   30321 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt ...
	I0223 17:04:13.917805   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt: {Name:mk4981f40b576090a5abf96b77f791333731295e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:13.918048   30321 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key ...
	I0223 17:04:13.918055   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key: {Name:mkcac27c753957cd07ba28de35fc56a0e42e26b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
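
The crypto.go steps above issue per-profile certificates signed by the shared minikube CA, with the SANs listed in the log. A hedged Go sketch of issuing such a certificate with crypto/x509 (it assumes the CA key is a PKCS#1 RSA key; file names, subject, and validity period are illustrative, not minikube's actual helper):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns its first PEM block, panicking on error.
func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.crt").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca.key").Bytes) // assumes RSA/PKCS#1
	if err != nil {
		panic(err)
	}

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{ // SANs from the log line above
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
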
	I0223 17:04:13.918221   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 17:04:13.918251   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 17:04:13.918271   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 17:04:13.918337   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 17:04:13.918376   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 17:04:13.918410   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 17:04:13.918427   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 17:04:13.918444   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 17:04:13.918535   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:04:13.918582   30321 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:04:13.918593   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:04:13.918628   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:04:13.918662   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:04:13.918692   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:04:13.918756   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:04:13.918786   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:13.918806   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem -> /usr/share/ca-certificates/24885.pem
	I0223 17:04:13.918844   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /usr/share/ca-certificates/248852.pem
	I0223 17:04:13.919356   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:04:13.938023   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 17:04:13.955250   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:04:13.972692   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 17:04:13.990044   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:04:14.007252   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:04:14.024619   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:04:14.041921   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:04:14.059218   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:04:14.076771   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:04:14.094193   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:04:14.111961   30321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:04:14.126469   30321 ssh_runner.go:195] Run: openssl version
	I0223 17:04:14.131792   30321 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 17:04:14.132238   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:04:14.140450   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.144477   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.144572   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.144618   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:04:14.149818   30321 command_runner.go:130] > b5213941
	I0223 17:04:14.150253   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:04:14.158960   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:04:14.167444   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.172027   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.172071   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.172113   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:04:14.177443   30321 command_runner.go:130] > 51391683
	I0223 17:04:14.177808   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:04:14.186090   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:04:14.194695   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.198627   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.198651   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.198694   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:04:14.203810   30321 command_runner.go:130] > 3ec20f2e
	I0223 17:04:14.204262   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
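
The openssl/ln steps above install each PEM under /usr/share/ca-certificates and expose it to OpenSSL through a /etc/ssl/certs/<subject-hash>.0 symlink. A small Go sketch of that hashing and linking step (paths taken from the log, error handling trimmed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"

	// Ask openssl for the certificate's subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

	if _, err := os.Lstat(link); err == nil {
		return // symlink already present, nothing to do
	}
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}
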
	I0223 17:04:14.212871   30321 kubeadm.go:401] StartCluster: {Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:04:14.212999   30321 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:04:14.232790   30321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:04:14.240734   30321 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 17:04:14.240745   30321 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 17:04:14.240750   30321 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 17:04:14.240811   30321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:04:14.248331   30321 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:04:14.248387   30321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:04:14.256197   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 17:04:14.256208   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 17:04:14.256213   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 17:04:14.256223   30321 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:04:14.256247   30321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:04:14.256266   30321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:04:14.305883   30321 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 17:04:14.305888   30321 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 17:04:14.305924   30321 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:04:14.305933   30321 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 17:04:14.413213   30321 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:04:14.413227   30321 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:04:14.413305   30321 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:04:14.413307   30321 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:04:14.413423   30321 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:04:14.413435   30321 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:04:14.547463   30321 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:04:14.547474   30321 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:04:14.589806   30321 out.go:204]   - Generating certificates and keys ...
	I0223 17:04:14.589916   30321 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 17:04:14.589925   30321 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:04:14.590004   30321 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:04:14.590015   30321 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 17:04:14.794940   30321 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 17:04:14.794953   30321 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 17:04:14.971748   30321 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 17:04:14.971750   30321 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 17:04:15.161013   30321 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 17:04:15.161018   30321 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 17:04:15.479871   30321 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 17:04:15.479887   30321 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 17:04:15.741064   30321 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 17:04:15.741079   30321 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 17:04:15.741217   30321 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.741226   30321 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.854118   30321 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 17:04:15.854131   30321 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 17:04:15.854257   30321 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.854266   30321 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-384000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 17:04:15.997908   30321 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 17:04:15.997925   30321 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 17:04:16.260935   30321 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 17:04:16.260952   30321 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 17:04:16.444983   30321 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 17:04:16.444998   30321 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 17:04:16.445075   30321 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:04:16.445112   30321 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:04:16.550868   30321 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:04:16.550878   30321 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:04:16.859517   30321 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:04:16.859526   30321 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:04:16.897545   30321 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:04:16.897557   30321 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:04:17.001029   30321 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:04:17.001040   30321 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:04:17.011314   30321 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:04:17.011327   30321 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:04:17.012017   30321 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:04:17.012034   30321 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:04:17.012095   30321 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 17:04:17.012108   30321 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 17:04:17.089542   30321 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:04:17.089579   30321 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:04:17.111045   30321 out.go:204]   - Booting up control plane ...
	I0223 17:04:17.111119   30321 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:04:17.111126   30321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:04:17.111212   30321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:04:17.111224   30321 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:04:17.111291   30321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:04:17.111301   30321 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:04:17.111392   30321 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:04:17.111396   30321 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:04:17.111556   30321 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:04:17.111560   30321 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:04:25.097582   30321 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002824 seconds
	I0223 17:04:25.097607   30321 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002824 seconds
	I0223 17:04:25.097826   30321 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 17:04:25.097829   30321 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 17:04:25.105533   30321 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 17:04:25.105554   30321 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 17:04:25.621818   30321 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 17:04:25.621824   30321 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 17:04:25.621972   30321 kubeadm.go:322] [mark-control-plane] Marking the node multinode-384000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 17:04:25.621978   30321 command_runner.go:130] > [mark-control-plane] Marking the node multinode-384000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 17:04:26.129954   30321 kubeadm.go:322] [bootstrap-token] Using token: cx2c6w.bbyzuhv5cn3ewcwq
	I0223 17:04:26.129957   30321 command_runner.go:130] > [bootstrap-token] Using token: cx2c6w.bbyzuhv5cn3ewcwq
	I0223 17:04:26.169884   30321 out.go:204]   - Configuring RBAC rules ...
	I0223 17:04:26.169990   30321 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 17:04:26.170008   30321 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 17:04:26.171961   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 17:04:26.171976   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 17:04:26.212050   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 17:04:26.212052   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 17:04:26.215446   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 17:04:26.215458   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 17:04:26.218497   30321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 17:04:26.218502   30321 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 17:04:26.220593   30321 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 17:04:26.220604   30321 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 17:04:26.228549   30321 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 17:04:26.228566   30321 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 17:04:26.374035   30321 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 17:04:26.374049   30321 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 17:04:26.575276   30321 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 17:04:26.575307   30321 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 17:04:26.575752   30321 kubeadm.go:322] 
	I0223 17:04:26.575839   30321 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 17:04:26.575849   30321 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 17:04:26.575856   30321 kubeadm.go:322] 
	I0223 17:04:26.575921   30321 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 17:04:26.575927   30321 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 17:04:26.575934   30321 kubeadm.go:322] 
	I0223 17:04:26.575962   30321 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 17:04:26.575978   30321 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 17:04:26.576039   30321 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 17:04:26.576046   30321 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 17:04:26.576106   30321 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 17:04:26.576112   30321 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 17:04:26.576135   30321 kubeadm.go:322] 
	I0223 17:04:26.576195   30321 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 17:04:26.576202   30321 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 17:04:26.576212   30321 kubeadm.go:322] 
	I0223 17:04:26.576255   30321 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 17:04:26.576261   30321 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 17:04:26.576267   30321 kubeadm.go:322] 
	I0223 17:04:26.576329   30321 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 17:04:26.576338   30321 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 17:04:26.576415   30321 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 17:04:26.576424   30321 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 17:04:26.576476   30321 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 17:04:26.576482   30321 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 17:04:26.576487   30321 kubeadm.go:322] 
	I0223 17:04:26.576573   30321 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 17:04:26.576581   30321 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 17:04:26.576642   30321 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 17:04:26.576648   30321 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 17:04:26.576651   30321 kubeadm.go:322] 
	I0223 17:04:26.576711   30321 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.576716   30321 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.576804   30321 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 \
	I0223 17:04:26.576810   30321 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 \
	I0223 17:04:26.576824   30321 kubeadm.go:322] 	--control-plane 
	I0223 17:04:26.576828   30321 command_runner.go:130] > 	--control-plane 
	I0223 17:04:26.576830   30321 kubeadm.go:322] 
	I0223 17:04:26.576928   30321 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 17:04:26.576935   30321 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 17:04:26.576938   30321 kubeadm.go:322] 
	I0223 17:04:26.577007   30321 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.577014   30321 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cx2c6w.bbyzuhv5cn3ewcwq \
	I0223 17:04:26.577099   30321 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 
	I0223 17:04:26.577110   30321 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 
	I0223 17:04:26.579881   30321 kubeadm.go:322] W0224 01:04:14.298932    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 17:04:26.579888   30321 command_runner.go:130] ! W0224 01:04:14.298932    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 17:04:26.580034   30321 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 17:04:26.580037   30321 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 17:04:26.580140   30321 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:04:26.580147   30321 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:04:26.580159   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:04:26.580168   30321 cni.go:136] 1 nodes found, recommending kindnet
	I0223 17:04:26.620014   30321 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 17:04:26.656873   30321 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 17:04:26.663017   30321 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 17:04:26.663039   30321 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 17:04:26.663056   30321 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0223 17:04:26.663074   30321 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:04:26.663087   30321 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:04:26.663101   30321 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:04:26.663115   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.136225471 +0000
	I0223 17:04:26.663126   30321 command_runner.go:130] >  Birth: -
	I0223 17:04:26.663251   30321 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 17:04:26.663263   30321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 17:04:26.680707   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 17:04:27.208803   30321 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 17:04:27.212534   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 17:04:27.218738   30321 command_runner.go:130] > serviceaccount/kindnet created
	I0223 17:04:27.225995   30321 command_runner.go:130] > daemonset.apps/kindnet created
	I0223 17:04:27.231986   30321 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 17:04:27.232069   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.232070   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510 minikube.k8s.io/name=multinode-384000 minikube.k8s.io/updated_at=2023_02_23T17_04_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.240193   30321 command_runner.go:130] > -16
	I0223 17:04:27.240226   30321 ops.go:34] apiserver oom_adj: -16
	I0223 17:04:27.311814   30321 command_runner.go:130] > node/multinode-384000 labeled
	I0223 17:04:27.311869   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0223 17:04:27.311940   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.408073   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:27.908262   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:27.968202   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:28.408830   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:28.475246   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:28.908280   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:28.967612   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:29.408852   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:29.472978   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:29.908354   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:29.971761   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:30.409067   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:30.474873   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:30.908464   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:30.972109   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:31.408178   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:31.472779   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:31.908615   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:31.972454   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:32.408376   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:32.468254   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:32.908353   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:32.969151   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:33.408382   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:33.470935   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:33.908195   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:33.971920   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:34.408480   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:34.473271   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:34.908383   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:34.973062   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:35.408263   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:35.476833   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:35.908405   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:35.972479   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:36.408363   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:36.472255   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:36.908284   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:36.972764   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:37.408510   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:37.480102   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:37.908328   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:37.969697   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:38.408314   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:38.475073   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:38.908389   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:38.971204   30321 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 17:04:39.408365   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 17:04:39.472020   30321 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 17:04:39.472031   30321 command_runner.go:130] > default   0         0s
	I0223 17:04:39.475148   30321 kubeadm.go:1073] duration metric: took 12.243277339s to wait for elevateKubeSystemPrivileges.
	I0223 17:04:39.475171   30321 kubeadm.go:403] StartCluster complete in 25.262585357s
	I0223 17:04:39.475192   30321 settings.go:142] acquiring lock: {Name:mk850986f273a9d917e0b12c78b43b3396ccf03c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:39.475263   30321 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:39.475780   30321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/kubeconfig: {Name:mk7d15723b32e59bb8ea0777461e49fb0d77cb39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:04:39.504308   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 17:04:39.504345   30321 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 17:04:39.504407   30321 addons.go:65] Setting storage-provisioner=true in profile "multinode-384000"
	I0223 17:04:39.504425   30321 addons.go:227] Setting addon storage-provisioner=true in "multinode-384000"
	I0223 17:04:39.504424   30321 addons.go:65] Setting default-storageclass=true in profile "multinode-384000"
	I0223 17:04:39.504455   30321 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-384000"
	I0223 17:04:39.504475   30321 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:04:39.504487   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:04:39.504724   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:39.504817   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:39.507756   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:39.508023   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:04:39.508662   30321 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 17:04:39.508991   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:04:39.508999   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:39.509007   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:39.509013   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:39.543889   30321 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0223 17:04:39.543911   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:39.543921   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:39 GMT
	I0223 17:04:39.543928   30321 round_trippers.go:580]     Audit-Id: db55fe89-e714-4975-8cf3-8d02b7124d3f
	I0223 17:04:39.543937   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:39.543946   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:39.543955   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:39.543963   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:39.543974   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:04:39.544012   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"228","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:39.544427   30321 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"228","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:39.544469   30321 round_trippers.go:463] PUT https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:04:39.544475   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:39.544482   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:39.544489   30321 round_trippers.go:473]     Content-Type: application/json
	I0223 17:04:39.544494   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:39.550583   30321 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 17:04:39.550614   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:39.550626   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:04:39.550637   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:39 GMT
	I0223 17:04:39.550646   30321 round_trippers.go:580]     Audit-Id: 518b0559-3900-4a94-a869-e3e161f29070
	I0223 17:04:39.550657   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:39.550673   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:39.550701   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:39.550712   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:39.551287   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"316","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:39.577452   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:39.577716   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:04:39.599316   30321 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 17:04:39.599660   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/storage.k8s.io/v1/storageclasses
	I0223 17:04:39.636236   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:39.636254   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:39.636267   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:39.636282   30321 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 17:04:39.636302   30321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 17:04:39.636449   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:39.640311   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:39.640336   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:39.640345   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:39.640352   30321 round_trippers.go:580]     Content-Length: 109
	I0223 17:04:39.640358   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:39 GMT
	I0223 17:04:39.640363   30321 round_trippers.go:580]     Audit-Id: 072bf91e-cc84-40b5-9243-826b1de57f46
	I0223 17:04:39.640368   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:39.640372   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:39.640377   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:39.640410   30321 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"322"},"items":[]}
	I0223 17:04:39.640683   30321 addons.go:227] Setting addon default-storageclass=true in "multinode-384000"
	I0223 17:04:39.640705   30321 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:04:39.641150   30321 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:04:39.645405   30321 command_runner.go:130] > apiVersion: v1
	I0223 17:04:39.645434   30321 command_runner.go:130] > data:
	I0223 17:04:39.645444   30321 command_runner.go:130] >   Corefile: |
	I0223 17:04:39.645456   30321 command_runner.go:130] >     .:53 {
	I0223 17:04:39.645467   30321 command_runner.go:130] >         errors
	I0223 17:04:39.645486   30321 command_runner.go:130] >         health {
	I0223 17:04:39.645505   30321 command_runner.go:130] >            lameduck 5s
	I0223 17:04:39.645513   30321 command_runner.go:130] >         }
	I0223 17:04:39.645525   30321 command_runner.go:130] >         ready
	I0223 17:04:39.645547   30321 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 17:04:39.645558   30321 command_runner.go:130] >            pods insecure
	I0223 17:04:39.645575   30321 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 17:04:39.645587   30321 command_runner.go:130] >            ttl 30
	I0223 17:04:39.645592   30321 command_runner.go:130] >         }
	I0223 17:04:39.645599   30321 command_runner.go:130] >         prometheus :9153
	I0223 17:04:39.645603   30321 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 17:04:39.645609   30321 command_runner.go:130] >            max_concurrent 1000
	I0223 17:04:39.645614   30321 command_runner.go:130] >         }
	I0223 17:04:39.645617   30321 command_runner.go:130] >         cache 30
	I0223 17:04:39.645621   30321 command_runner.go:130] >         loop
	I0223 17:04:39.645624   30321 command_runner.go:130] >         reload
	I0223 17:04:39.645637   30321 command_runner.go:130] >         loadbalance
	I0223 17:04:39.645642   30321 command_runner.go:130] >     }
	I0223 17:04:39.645648   30321 command_runner.go:130] > kind: ConfigMap
	I0223 17:04:39.645652   30321 command_runner.go:130] > metadata:
	I0223 17:04:39.645663   30321 command_runner.go:130] >   creationTimestamp: "2023-02-24T01:04:26Z"
	I0223 17:04:39.645669   30321 command_runner.go:130] >   name: coredns
	I0223 17:04:39.645673   30321 command_runner.go:130] >   namespace: kube-system
	I0223 17:04:39.645682   30321 command_runner.go:130] >   resourceVersion: "224"
	I0223 17:04:39.645688   30321 command_runner.go:130] >   uid: 8e4da503-6c9a-4528-9e22-a1db71461ae8
	I0223 17:04:39.645892   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0223 17:04:39.712402   30321 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 17:04:39.712416   30321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 17:04:39.712483   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:39.712586   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:39.776504   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:04:39.967284   30321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 17:04:39.968908   30321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 17:04:40.051562   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:04:40.051586   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.051638   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.051653   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.054834   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:40.054849   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.054856   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.054863   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.054869   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.054874   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.054880   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:04:40.054889   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.054898   30321 round_trippers.go:580]     Audit-Id: 9002b302-d768-4ecf-b06a-c93d260628cb
	I0223 17:04:40.054916   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"359","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 17:04:40.054982   30321 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-384000" context rescaled to 1 replicas
	I0223 17:04:40.055005   30321 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 17:04:40.055479   30321 command_runner.go:130] > configmap/coredns replaced
	I0223 17:04:40.078224   30321 out.go:177] * Verifying Kubernetes components...
	I0223 17:04:40.078280   30321 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0223 17:04:40.152322   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:04:40.284666   30321 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 17:04:40.314959   30321 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 17:04:40.319659   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 17:04:40.354965   30321 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 17:04:40.360768   30321 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 17:04:40.367125   30321 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 17:04:40.374883   30321 command_runner.go:130] > pod/storage-provisioner created
	I0223 17:04:40.381648   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:04:40.407711   30321 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0223 17:04:40.450427   30321 addons.go:492] enable addons completed in 946.056078ms: enabled=[default-storageclass storage-provisioner]
	I0223 17:04:40.469437   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:04:40.469635   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:04:40.469888   30321 node_ready.go:35] waiting up to 6m0s for node "multinode-384000" to be "Ready" ...
	I0223 17:04:40.469938   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:40.469943   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.469951   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.469957   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.472751   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.472774   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.472780   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.472785   30321 round_trippers.go:580]     Audit-Id: f2708e51-e93b-4c72-893b-657a733849f4
	I0223 17:04:40.472791   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.472797   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.472802   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.472812   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.472898   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:40.473336   30321 node_ready.go:49] node "multinode-384000" has status "Ready":"True"
	I0223 17:04:40.473345   30321 node_ready.go:38] duration metric: took 3.439793ms waiting for node "multinode-384000" to be "Ready" ...
	I0223 17:04:40.473353   30321 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:04:40.473402   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:40.473407   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.473413   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.473418   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.476930   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:40.476947   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.476956   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.476963   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.476972   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.476979   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.476986   30321 round_trippers.go:580]     Audit-Id: eaf7bf69-ef2b-4746-a20e-cca80ce1fa0e
	I0223 17:04:40.476995   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.478350   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"370"},"items":[{"metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60224 chars]
	I0223 17:04:40.480900   30321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-bvdps" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:40.480955   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:40.480962   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.480968   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.480974   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.483591   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.483604   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.483611   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.483619   30321 round_trippers.go:580]     Audit-Id: ad0a48df-19d1-4c81-909f-b1e0cfa60945
	I0223 17:04:40.483625   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.483631   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.483671   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.483682   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.483795   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 17:04:40.484066   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:40.484073   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.484079   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.484084   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.486541   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.486552   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.486558   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.486564   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.486568   30321 round_trippers.go:580]     Audit-Id: a77986f4-7773-4253-8f44-c975169bb0dd
	I0223 17:04:40.486573   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.486577   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.486582   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.486893   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:40.987396   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:40.987423   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.987431   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.987437   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.990149   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.990170   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.990183   30321 round_trippers.go:580]     Audit-Id: 1b950b03-4dac-4e47-bf6b-7a0b45057e81
	I0223 17:04:40.990196   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.990208   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.990215   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.990221   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.990226   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.990304   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 17:04:40.990589   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:40.990596   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:40.990604   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:40.990613   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:40.993057   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:40.993072   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:40.993080   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:40.993086   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:40.993092   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:40 GMT
	I0223 17:04:40.993097   30321 round_trippers.go:580]     Audit-Id: ece30c7c-13a2-44a0-8628-2d40eba39982
	I0223 17:04:40.993102   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:40.993108   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:40.993164   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:41.487216   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:41.487235   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.487242   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.487247   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.489636   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.489652   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.489659   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.489665   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.489673   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.489679   30321 round_trippers.go:580]     Audit-Id: 6202d3fd-f784-4864-a52a-08ea30e2a125
	I0223 17:04:41.489688   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.489696   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.490115   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"355","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 17:04:41.490463   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:41.490471   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.490478   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.490483   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.492960   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.492971   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.492977   30321 round_trippers.go:580]     Audit-Id: d649c63a-5328-4d65-b9c9-b0516d5f6975
	I0223 17:04:41.492982   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.492987   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.492991   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.492996   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.493001   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.493072   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:41.987503   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:41.987519   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.987528   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.987535   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.990443   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.990456   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.990464   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.990471   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.990478   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.990485   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.990490   30321 round_trippers.go:580]     Audit-Id: 4f05cb4f-158c-4231-9712-f5cb71c8dbd7
	I0223 17:04:41.990496   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.990574   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:41.990844   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:41.990851   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:41.990856   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:41.990865   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:41.993113   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:41.993128   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:41.993138   30321 round_trippers.go:580]     Audit-Id: ee936449-e5e5-4357-9b80-6e42dfcffdd7
	I0223 17:04:41.993145   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:41.993151   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:41.993156   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:41.993161   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:41.993166   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:41 GMT
	I0223 17:04:41.993289   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:42.487870   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:42.487892   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.487905   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.487916   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.491978   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:42.491998   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.492006   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.492011   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.492016   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.492020   30321 round_trippers.go:580]     Audit-Id: ee8e1df6-8b00-4abe-ae71-69cd18b3e571
	I0223 17:04:42.492025   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.492030   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.492104   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:42.492483   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:42.492491   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.492497   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.492503   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.494690   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:42.494701   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.494707   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.494712   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.494720   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.494725   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.494730   30321 round_trippers.go:580]     Audit-Id: 49af8822-ddb0-43e2-a660-91492a65acfd
	I0223 17:04:42.494736   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.494804   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:42.494995   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:42.987325   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:42.987341   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.987355   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.987369   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.990503   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:42.990521   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.990528   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.990533   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.990542   30321 round_trippers.go:580]     Audit-Id: 4da14174-338d-4a5b-89bd-1bc55e0f006c
	I0223 17:04:42.990564   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.990593   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.990601   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.990686   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:42.991043   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:42.991055   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:42.991066   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:42.991078   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:42.997444   30321 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0223 17:04:42.997465   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:42.997476   30321 round_trippers.go:580]     Audit-Id: f9e2c73f-393d-4ca6-ab25-9bf1ace4b421
	I0223 17:04:42.997484   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:42.997493   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:42.997501   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:42.997509   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:42.997522   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:42 GMT
	I0223 17:04:42.997615   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:43.489176   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:43.489190   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.489197   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.489202   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.492545   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:43.492558   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.492563   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.492568   30321 round_trippers.go:580]     Audit-Id: 6a4753fc-bcef-4349-833f-19c985476afd
	I0223 17:04:43.492573   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.492577   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.492582   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.492601   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.492811   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:43.493104   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:43.493112   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.493118   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.493123   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.495465   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:43.495478   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.495490   30321 round_trippers.go:580]     Audit-Id: ee99fc4a-b92f-4686-9f3f-f19252ce8f5b
	I0223 17:04:43.495499   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.495510   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.495516   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.495522   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.495526   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.495899   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:43.987168   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:43.987185   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.987192   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.987197   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.989970   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:43.989982   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.989989   30321 round_trippers.go:580]     Audit-Id: 1eebd99f-e8bd-4bb8-8f00-aaec045b11bd
	I0223 17:04:43.989994   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.989999   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.990004   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.990009   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.990014   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.990083   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:43.990361   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:43.990370   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:43.990384   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:43.990398   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:43.992599   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:43.992617   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:43.992631   30321 round_trippers.go:580]     Audit-Id: 486d8a13-0bb0-484d-b49b-452ce096560e
	I0223 17:04:43.992641   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:43.992647   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:43.992651   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:43.992656   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:43.992662   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:43 GMT
	I0223 17:04:43.992876   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:44.487443   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:44.487487   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.487592   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.487605   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.492298   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:44.492312   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.492318   30321 round_trippers.go:580]     Audit-Id: 8acf453d-b615-4bf7-8074-e74f3a1dc912
	I0223 17:04:44.492325   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.492334   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.492341   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.492348   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.492354   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.492432   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:44.492724   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:44.492730   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.492737   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.492742   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.494990   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:44.494999   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.495005   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.495010   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.495015   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.495020   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.495027   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.495033   30321 round_trippers.go:580]     Audit-Id: bfa16d57-e46d-43b8-8889-2a2cb311b524
	I0223 17:04:44.495114   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:44.495287   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:44.987618   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:44.987643   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.987655   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.987665   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.991503   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:44.991516   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.991522   30321 round_trippers.go:580]     Audit-Id: fa242d57-d807-4a9b-9b12-a2f5afd3fcda
	I0223 17:04:44.991527   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.991536   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.991543   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.991559   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.991567   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.991634   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:44.991915   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:44.991922   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:44.991928   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:44.991934   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:44.993953   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:44.993964   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:44.993969   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:44.993975   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:44.993979   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:44.993985   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:44 GMT
	I0223 17:04:44.993989   30321 round_trippers.go:580]     Audit-Id: b82d2d1d-2e74-4577-a46b-816a35a0923e
	I0223 17:04:44.993995   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:44.994052   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:45.487349   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:45.487375   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.487388   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.487397   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.491000   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:45.491013   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.491019   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.491024   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.491028   30321 round_trippers.go:580]     Audit-Id: b288164a-ab04-46e5-812a-1d891d4e3d41
	I0223 17:04:45.491033   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.491041   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.491046   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.491353   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:45.491634   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:45.491640   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.491646   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.491652   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.493792   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:45.493803   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.493808   30321 round_trippers.go:580]     Audit-Id: 0425ee23-0d59-4b94-acbf-e07e5bf849ac
	I0223 17:04:45.493813   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.493818   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.493823   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.493829   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.493833   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.493932   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:45.988698   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:45.988713   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.988767   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.988773   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.992827   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:45.992838   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.992849   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.992855   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.992860   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.992865   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.992870   30321 round_trippers.go:580]     Audit-Id: 1ae792ed-fb0e-4756-beb1-3884d8aacb52
	I0223 17:04:45.992875   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.992940   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:45.993213   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:45.993219   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:45.993225   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:45.993230   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:45.995311   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:45.995321   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:45.995326   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:45.995332   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:45 GMT
	I0223 17:04:45.995337   30321 round_trippers.go:580]     Audit-Id: b6dfe7bc-63a9-4655-a67e-987d92c5f38d
	I0223 17:04:45.995343   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:45.995349   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:45.995354   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:45.995402   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:46.488689   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:46.488705   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.488712   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.488717   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.491839   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:46.491850   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.491856   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.491863   30321 round_trippers.go:580]     Audit-Id: d726c5bf-f539-4592-b838-8e63a45bf193
	I0223 17:04:46.491868   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.491872   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.491877   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.491882   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.493129   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:46.493829   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:46.493837   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.493843   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.493849   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.496229   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:46.496240   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.496246   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.496251   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.496256   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.496261   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.496266   30321 round_trippers.go:580]     Audit-Id: 3ad5e27a-e070-49b7-b987-724afe494bff
	I0223 17:04:46.496271   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.496527   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:46.496720   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:46.987737   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:46.987756   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.987763   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.987768   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.990502   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:46.990521   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.990527   30321 round_trippers.go:580]     Audit-Id: 36401443-402b-440c-a705-f247b14be0d0
	I0223 17:04:46.990559   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.990566   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.990571   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.990578   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.990583   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.990661   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:46.990969   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:46.990976   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:46.990982   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:46.990987   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:46.992973   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:04:46.992983   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:46.992989   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:46 GMT
	I0223 17:04:46.992994   30321 round_trippers.go:580]     Audit-Id: c0223492-547c-4c0f-9820-f14bad7e2250
	I0223 17:04:46.992999   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:46.993004   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:46.993010   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:46.993014   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:46.993320   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"315","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:47.487718   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:47.487736   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.487784   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.487790   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.490538   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:47.490554   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.490560   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.490565   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.490570   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.490575   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.490583   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.490590   30321 round_trippers.go:580]     Audit-Id: 3f574b37-3fe3-4104-9d6c-5e499a66c939
	I0223 17:04:47.490713   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:47.491061   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:47.491069   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.491075   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.491081   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.493416   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:47.493428   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.493433   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.493441   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.493446   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.493451   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.493456   30321 round_trippers.go:580]     Audit-Id: 2a5014cd-6cea-4298-b7db-96ffd5fedfcd
	I0223 17:04:47.493461   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.493518   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:47.987154   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:47.987173   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.987180   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.987185   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.990169   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:47.990181   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.990187   30321 round_trippers.go:580]     Audit-Id: 8ecb152a-1eba-4d6f-9897-4d97af58b987
	I0223 17:04:47.990192   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.990197   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.990205   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.990210   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.990215   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.990282   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:47.990558   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:47.990564   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:47.990570   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:47.990575   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:47.992547   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:04:47.992556   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:47.992561   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:47 GMT
	I0223 17:04:47.992567   30321 round_trippers.go:580]     Audit-Id: 27d8c3de-f5a1-425c-8c31-3865396da818
	I0223 17:04:47.992573   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:47.992593   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:47.992599   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:47.992604   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:47.992665   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:48.487227   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:48.487243   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.487249   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.487254   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.490078   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.490094   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.490104   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.490115   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.490122   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.490127   30321 round_trippers.go:580]     Audit-Id: 7327d450-86d2-4148-a09e-c0553854b072
	I0223 17:04:48.490132   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.490138   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.490340   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:48.490639   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:48.490646   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.490652   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.490657   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.493002   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.493014   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.493020   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.493025   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.493030   30321 round_trippers.go:580]     Audit-Id: ac391ddc-5031-4569-bbc0-475e5b876f9c
	I0223 17:04:48.493036   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.493040   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.493046   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.493120   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:48.987155   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:48.987174   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.987181   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.987186   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.990178   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.990196   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.990205   30321 round_trippers.go:580]     Audit-Id: 1147a35c-397a-4f83-beed-9d93c3e7cf40
	I0223 17:04:48.990227   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.990239   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.990250   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.990259   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.990267   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.990350   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:48.990674   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:48.990682   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:48.990688   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:48.990696   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:48.992776   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:48.992791   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:48.992798   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:48.992805   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:48.992812   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:48.992820   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:48 GMT
	I0223 17:04:48.992826   30321 round_trippers.go:580]     Audit-Id: 1c4177d6-a7f0-4188-a26f-f20ac7e0a950
	I0223 17:04:48.992831   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:48.992918   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:48.993206   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:49.487133   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:49.487149   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.487155   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.487161   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.489850   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:49.489867   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.489876   30321 round_trippers.go:580]     Audit-Id: 2fb5081e-7e45-4bc1-ab45-101006a429fa
	I0223 17:04:49.489883   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.489889   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.489893   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.489899   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.489904   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.489979   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:49.490327   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:49.490335   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.490346   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.490359   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.492923   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:49.492935   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.492941   30321 round_trippers.go:580]     Audit-Id: b2d59498-97c2-478a-8fc9-1260bc886beb
	I0223 17:04:49.492945   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.492950   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.492955   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.492960   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.492964   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.493031   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:49.988410   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:49.988428   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.988434   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.988439   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.991686   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:49.991700   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.991706   30321 round_trippers.go:580]     Audit-Id: fe841beb-cf4e-4208-8222-7d33d5f0c270
	I0223 17:04:49.991711   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.991716   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.991721   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.991727   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.991735   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.991805   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:49.992102   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:49.992110   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:49.992117   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:49.992124   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:49.994516   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:49.994528   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:49.994536   30321 round_trippers.go:580]     Audit-Id: ccd5c503-91de-48d1-8725-2d831b1a728c
	I0223 17:04:49.994543   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:49.994554   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:49.994568   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:49.994576   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:49.994589   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:49 GMT
	I0223 17:04:49.994821   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:50.488454   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:50.488469   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.488475   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.488480   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.491340   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:50.491353   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.491362   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.491369   30321 round_trippers.go:580]     Audit-Id: d406bae5-6001-4060-abf9-82fa403f71fd
	I0223 17:04:50.491376   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.491383   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.491390   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.491401   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.491533   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:50.491831   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:50.491838   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.491844   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.491849   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.494431   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:50.494448   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.494469   30321 round_trippers.go:580]     Audit-Id: 905f57e6-d5f4-4979-a0f7-7963b75e15fa
	I0223 17:04:50.494479   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.494486   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.494492   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.494496   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.494502   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.494655   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:50.987156   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:50.987176   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.987183   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.987188   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.990283   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:50.990297   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.990303   30321 round_trippers.go:580]     Audit-Id: 08fde636-e2a2-43f9-acc1-a0e82ff82505
	I0223 17:04:50.990308   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.990316   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.990322   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.990327   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.990331   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.990405   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:50.990694   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:50.990700   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:50.990707   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:50.990713   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:50.992898   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:50.992911   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:50.992917   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:50.992922   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:50 GMT
	I0223 17:04:50.992926   30321 round_trippers.go:580]     Audit-Id: f88bc341-c3ea-4612-a233-c230aa898e32
	I0223 17:04:50.992932   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:50.992936   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:50.992945   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:50.993038   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:50.993239   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:51.487146   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:51.487205   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.487214   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.487223   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.489873   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.489888   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.489896   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.489903   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.489912   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.489919   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.489928   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.489935   30321 round_trippers.go:580]     Audit-Id: 4e324f53-bda1-4fa2-88b6-55677a1a5719
	I0223 17:04:51.490099   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:51.490384   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:51.490391   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.490397   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.490402   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.492662   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.492675   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.492682   30321 round_trippers.go:580]     Audit-Id: 590fc50a-d03d-4928-9ac0-8015b6e1aa4a
	I0223 17:04:51.492688   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.492696   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.492703   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.492711   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.492716   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.492803   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:51.988429   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:51.988442   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.988449   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.988454   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.991262   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.991280   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.991289   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.991302   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.991312   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.991319   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.991325   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.991333   30321 round_trippers.go:580]     Audit-Id: a36e6404-412c-47c7-873b-751f3184d5f3
	I0223 17:04:51.991421   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:51.991716   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:51.991723   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:51.991729   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:51.991734   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:51.994015   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:51.994025   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:51.994031   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:51.994036   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:51.994041   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:51.994046   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:51 GMT
	I0223 17:04:51.994050   30321 round_trippers.go:580]     Audit-Id: 0816dcf1-d6ec-40f2-8373-be17980d696a
	I0223 17:04:51.994056   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:51.994111   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:52.488427   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:52.488443   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.488450   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.488455   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.491082   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:52.491100   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.491106   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.491111   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.491122   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.491129   30321 round_trippers.go:580]     Audit-Id: 1eb742b1-2554-4024-aaf7-6cd60d723824
	I0223 17:04:52.491134   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.491142   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.491213   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:52.491509   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:52.491515   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.491521   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.491527   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.493598   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:52.493610   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.493616   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.493621   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.493630   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.493637   30321 round_trippers.go:580]     Audit-Id: 17f68baa-da41-43f4-bfc2-67f2a3f7be73
	I0223 17:04:52.493642   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.493646   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.493846   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:52.987505   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:52.987519   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.987526   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.987532   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.990588   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:52.990600   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.990606   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.990612   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.990618   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.990625   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.990632   30321 round_trippers.go:580]     Audit-Id: 3521a5da-1e50-406c-a1ce-47bf5d07bdff
	I0223 17:04:52.990637   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.991129   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:52.991410   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:52.991417   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:52.991423   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:52.991428   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:52.994095   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:52.994111   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:52.994121   30321 round_trippers.go:580]     Audit-Id: aff1e28d-3ed1-4f54-bd73-34f59d41300c
	I0223 17:04:52.994131   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:52.994140   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:52.994166   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:52.994212   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:52.994236   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:52 GMT
	I0223 17:04:52.994300   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:52.994500   30321 pod_ready.go:102] pod "coredns-787d4945fb-bvdps" in "kube-system" namespace has status "Ready":"False"
	I0223 17:04:53.487229   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:53.487243   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.487253   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.487261   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.490741   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:53.490753   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.490760   30321 round_trippers.go:580]     Audit-Id: 96d9e6ae-710b-4dba-b6ae-ca5b86c765f1
	I0223 17:04:53.490764   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.490769   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.490774   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.490780   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.490787   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.490880   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:53.491285   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:53.491293   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.491302   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.491311   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.494134   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:53.494201   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.494216   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.494238   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.494248   30321 round_trippers.go:580]     Audit-Id: fabcbfdb-c350-466d-b137-c03d10467209
	I0223 17:04:53.494254   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.494259   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.494264   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.494342   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:53.988176   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:53.988197   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.988204   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.988209   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.991495   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:53.991509   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.991515   30321 round_trippers.go:580]     Audit-Id: 90fe2a23-80e4-4eb2-abc9-11693888cad4
	I0223 17:04:53.991520   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.991526   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.991534   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.991540   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.991544   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.991707   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-bvdps","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"5e0ff7c6-6e83-42f4-bcd9-47d435925027","resourceVersion":"390","creationTimestamp":"2023-02-24T01:04:39Z","deletionTimestamp":"2023-02-24T01:05:09Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 17:04:53.991991   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:53.991997   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:53.992003   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:53.992008   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:53.994192   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:53.994207   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:53.994215   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:53.994230   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:53.994238   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:53 GMT
	I0223 17:04:53.994261   30321 round_trippers.go:580]     Audit-Id: f8ea2450-cca7-438f-bb30-74ef92a97cba
	I0223 17:04:53.994285   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:53.994302   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:53.994381   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:54.489070   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-bvdps
	I0223 17:04:54.489087   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.489094   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.489099   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:54.491580   30321 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0223 17:04:54.491592   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:54.491597   30321 round_trippers.go:580]     Audit-Id: 7337d8c7-9cde-4157-9e8e-48d0b0c4a4ba
	I0223 17:04:54.491608   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:54.491614   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:54.491618   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:54.491623   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:54.491628   30321 round_trippers.go:580]     Content-Length: 216
	I0223 17:04:54.491637   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:54 GMT
	I0223 17:04:54.491652   30321 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-bvdps\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-bvdps","kind":"pods"},"code":404}
	I0223 17:04:54.491773   30321 pod_ready.go:97] error getting pod "coredns-787d4945fb-bvdps" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-bvdps" not found
	I0223 17:04:54.491784   30321 pod_ready.go:81] duration metric: took 14.011028799s waiting for pod "coredns-787d4945fb-bvdps" in "kube-system" namespace to be "Ready" ...
	E0223 17:04:54.491790   30321 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-bvdps" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-bvdps" not found
	I0223 17:04:54.491797   30321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:54.491833   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:04:54.491838   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.491844   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.491849   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:54.494103   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:54.494115   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:54.494120   30321 round_trippers.go:580]     Audit-Id: 46dade29-c38d-4abc-89c3-3443e8b7aa4c
	I0223 17:04:54.494125   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:54.494130   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:54.494135   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:54.494141   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:54.494148   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:54 GMT
	I0223 17:04:54.494274   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"397","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 17:04:54.494589   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:54.494596   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.494603   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.494611   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:54.497218   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:54.497235   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:54.497247   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:54.497253   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:54 GMT
	I0223 17:04:54.497258   30321 round_trippers.go:580]     Audit-Id: 017fcd45-c6f2-482e-8186-b7281d97e8ce
	I0223 17:04:54.497262   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:54.497268   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:54.497273   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:54.497336   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:54.998734   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:04:54.998747   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:54.998754   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:54.998759   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.001577   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.001592   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.001598   30321 round_trippers.go:580]     Audit-Id: 3b6d9905-4c85-4f1e-8ea5-890ac7ca9c42
	I0223 17:04:55.001603   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.001608   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.001613   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.001618   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.001623   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.001685   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"397","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 17:04:55.001959   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.001965   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.001971   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.001976   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.003993   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.004004   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.004011   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.004016   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.004023   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.004044   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.004053   30321 round_trippers.go:580]     Audit-Id: 429a9b92-9067-40b4-ac74-9fe4b9865514
	I0223 17:04:55.004059   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.004121   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.498767   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:04:55.498789   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.498802   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.498811   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.502788   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:55.502804   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.502813   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.502820   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.502827   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.502833   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.502840   30321 round_trippers.go:580]     Audit-Id: 50109eea-dace-4cf8-a972-5958051ef888
	I0223 17:04:55.502851   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.502933   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 17:04:55.503317   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.503323   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.503328   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.503334   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.505342   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.505352   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.505357   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.505362   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.505367   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.505372   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.505377   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.505383   30321 round_trippers.go:580]     Audit-Id: 5152107d-1158-4cc2-bc9f-57d4a53f66c7
	I0223 17:04:55.505434   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.505613   30321 pod_ready.go:92] pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.505625   30321 pod_ready.go:81] duration metric: took 1.013829993s waiting for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.505631   30321 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.505656   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/etcd-multinode-384000
	I0223 17:04:55.505661   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.505667   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.505673   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.507724   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.507734   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.507739   30321 round_trippers.go:580]     Audit-Id: a3dd6bc2-7146-44fc-892d-830c76e12cfc
	I0223 17:04:55.507744   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.507750   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.507755   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.507762   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.507768   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.507817   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-384000","namespace":"kube-system","uid":"c892d753-c892-4834-ba6f-34c4703cfa21","resourceVersion":"266","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.mirror":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.seen":"2023-02-24T01:04:26.472791839Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 17:04:55.508038   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.508044   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.508050   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.508057   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.510229   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.510240   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.510248   30321 round_trippers.go:580]     Audit-Id: 4bba5138-84c9-408e-a7c7-8bbf5defbfd4
	I0223 17:04:55.510268   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.510277   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.510286   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.510293   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.510298   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.510350   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.510535   30321 pod_ready.go:92] pod "etcd-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.510540   30321 pod_ready.go:81] duration metric: took 4.904985ms waiting for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.510548   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.510579   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-384000
	I0223 17:04:55.510583   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.510589   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.510595   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.512818   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.512829   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.512837   30321 round_trippers.go:580]     Audit-Id: 5e83b0dd-9d44-449a-9934-e776d61910a1
	I0223 17:04:55.512845   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.512850   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.512856   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.512863   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.512868   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.512941   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-384000","namespace":"kube-system","uid":"c42cb310-4d3e-44ed-aa9c-0f0bc12249d1","resourceVersion":"261","creationTimestamp":"2023-02-24T01:04:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.mirror":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.seen":"2023-02-24T01:04:17.403781278Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 17:04:55.513208   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.513214   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.513220   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.513225   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.515528   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.515537   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.515542   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.515548   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.515554   30321 round_trippers.go:580]     Audit-Id: 62a16788-8ae8-458c-94d6-da0d2fb90772
	I0223 17:04:55.515558   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.515564   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.515570   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.515702   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.515873   30321 pod_ready.go:92] pod "kube-apiserver-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.515878   30321 pod_ready.go:81] duration metric: took 5.324863ms waiting for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.515884   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.515920   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-384000
	I0223 17:04:55.515926   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.515934   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.515942   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.518008   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.518017   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.518022   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.518027   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.518032   30321 round_trippers.go:580]     Audit-Id: 535b01f2-bb83-4118-9c9a-6247b64d1224
	I0223 17:04:55.518037   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.518041   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.518047   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.518120   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-384000","namespace":"kube-system","uid":"ac83dab3-bb77-4542-9452-419c3f5087cb","resourceVersion":"264","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.mirror":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 17:04:55.518390   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.518396   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.518402   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.518407   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.520439   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.520449   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.520455   30321 round_trippers.go:580]     Audit-Id: 0b7b3abc-53ec-4198-8e4e-bab3fd1d4f9c
	I0223 17:04:55.520460   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.520465   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.520471   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.520475   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.520481   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.520526   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.520692   30321 pod_ready.go:92] pod "kube-controller-manager-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.520697   30321 pod_ready.go:81] duration metric: took 4.809125ms waiting for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.520705   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.520734   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-wmsxr
	I0223 17:04:55.520739   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.520746   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.520752   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.523043   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.523055   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.523064   30321 round_trippers.go:580]     Audit-Id: 9ba134ed-916c-43ef-810b-9cf02c133feb
	I0223 17:04:55.523072   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.523080   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.523087   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.523094   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.523101   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.523157   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wmsxr","generateName":"kube-proxy-","namespace":"kube-system","uid":"6d046618-e274-4a16-8846-14837962c18d","resourceVersion":"391","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 17:04:55.523393   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.523399   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.523405   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.523411   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.525430   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.525439   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.525447   30321 round_trippers.go:580]     Audit-Id: dfcaef41-2870-4718-af15-de2813bbd7eb
	I0223 17:04:55.525453   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.525458   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.525464   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.525468   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.525474   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.525525   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.525693   30321 pod_ready.go:92] pod "kube-proxy-wmsxr" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.525699   30321 pod_ready.go:81] duration metric: took 4.989131ms waiting for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.525704   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.698837   30321 request.go:622] Waited for 173.072526ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-384000
	I0223 17:04:55.698866   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-384000
	I0223 17:04:55.698872   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.698879   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.698884   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.701143   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:04:55.701154   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.701160   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.701165   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.701170   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.701175   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.701180   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.701185   30321 round_trippers.go:580]     Audit-Id: 46f23c4d-0ccf-4f7b-b582-0604eb932c30
	I0223 17:04:55.701243   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-384000","namespace":"kube-system","uid":"f914009d-3787-433d-8e3e-2f597d741c7e","resourceVersion":"279","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.mirror":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807884Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 17:04:55.899379   30321 request.go:622] Waited for 197.898951ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.899510   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:04:55.899526   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.899538   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.899547   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.903628   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:55.903642   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.903650   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.903657   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.903665   30321 round_trippers.go:580]     Audit-Id: 387c2cfb-ab5d-4889-b421-949d269d7e27
	I0223 17:04:55.903671   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.903677   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.903684   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.903756   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 17:04:55.903986   30321 pod_ready.go:92] pod "kube-scheduler-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:04:55.903992   30321 pod_ready.go:81] duration metric: took 378.288092ms waiting for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:04:55.903999   30321 pod_ready.go:38] duration metric: took 15.43081074s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
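[Editor's note] The loop recorded above repeatedly GETs each kube-system pod and its node until the pod reports the Ready condition (or, as with coredns-787d4945fb-bvdps, disappears with a 404). The following is a minimal illustrative sketch of such a readiness poll written against client-go; it is not minikube's pod_ready.go, and the kubeconfig path, timeout, and poll interval are assumptions chosen to mirror what the log shows.

    // Illustrative sketch only: poll a kube-system pod until its Ready condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; the real test uses the profile's own config.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	name := "coredns-787d4945fb-nlz4z" // pod name taken from the log above
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			// A deleted pod (404) ends the wait for that pod, as seen in the log.
    			fmt.Printf("giving up on %s: %v\n", name, err)
    			return
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				fmt.Printf("pod %s is Ready\n", name)
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows roughly 500 ms between polls
    	}
    }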
	I0223 17:04:55.904013   30321 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:04:55.904070   30321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:04:55.913724   30321 command_runner.go:130] > 1883
	I0223 17:04:55.914492   30321 api_server.go:71] duration metric: took 15.859632508s to wait for apiserver process to appear ...
	I0223 17:04:55.914501   30321 api_server.go:87] waiting for apiserver healthz status ...
	I0223 17:04:55.914513   30321 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58131/healthz ...
	I0223 17:04:55.919164   30321 api_server.go:278] https://127.0.0.1:58131/healthz returned 200:
	ok
	I0223 17:04:55.919195   30321 round_trippers.go:463] GET https://127.0.0.1:58131/version
	I0223 17:04:55.919200   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:55.919206   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:55.919213   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:55.920562   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:04:55.920571   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:55.920577   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:55.920582   30321 round_trippers.go:580]     Content-Length: 263
	I0223 17:04:55.920587   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:55 GMT
	I0223 17:04:55.920592   30321 round_trippers.go:580]     Audit-Id: b3ee4d8e-5bf5-4c55-9828-eb3d85629b10
	I0223 17:04:55.920599   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:55.920604   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:55.920609   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:55.920621   30321 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 17:04:55.920661   30321 api_server.go:140] control plane version: v1.26.1
	I0223 17:04:55.920667   30321 api_server.go:130] duration metric: took 6.162432ms to wait for apiserver health ...
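[Editor's note] After the pod waits, the log records a health probe of the apiserver (/healthz returning "ok") followed by a /version request that yields v1.26.1. A hedged sketch of those two checks with client-go, assuming the same "client" and "ctx" as in the previous sketch, could look like this:

    // Illustrative sketch only: probe /healthz and read the server version.
    raw, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    if err != nil {
    	panic(err)
    }
    fmt.Printf("healthz: %s\n", raw) // expected body is simply "ok"

    info, err := client.Discovery().ServerVersion()
    if err != nil {
    	panic(err)
    }
    fmt.Printf("control plane version: %s\n", info.GitVersion) // e.g. v1.26.1 in the log above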
	I0223 17:04:55.920671   30321 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 17:04:56.099525   30321 request.go:622] Waited for 178.804731ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.099558   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.099565   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.099572   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.099580   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.103049   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:56.103059   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.103064   30321 round_trippers.go:580]     Audit-Id: cbe26173-1ea5-401b-b0d0-b634efd79e7f
	I0223 17:04:56.103069   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.103074   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.103079   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.103087   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.103093   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.104354   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 17:04:56.105681   30321 system_pods.go:59] 8 kube-system pods found
	I0223 17:04:56.105696   30321 system_pods.go:61] "coredns-787d4945fb-nlz4z" [08aa5e04-355e-44b5-a80e-38f3491700e7] Running
	I0223 17:04:56.105701   30321 system_pods.go:61] "etcd-multinode-384000" [c892d753-c892-4834-ba6f-34c4703cfa21] Running
	I0223 17:04:56.105705   30321 system_pods.go:61] "kindnet-n4mpj" [6ef38cba-f7c8-4063-a588-dfd2146fd0a4] Running
	I0223 17:04:56.105708   30321 system_pods.go:61] "kube-apiserver-multinode-384000" [c42cb310-4d3e-44ed-aa9c-0f0bc12249d1] Running
	I0223 17:04:56.105712   30321 system_pods.go:61] "kube-controller-manager-multinode-384000" [ac83dab3-bb77-4542-9452-419c3f5087cb] Running
	I0223 17:04:56.105717   30321 system_pods.go:61] "kube-proxy-wmsxr" [6d046618-e274-4a16-8846-14837962c18d] Running
	I0223 17:04:56.105723   30321 system_pods.go:61] "kube-scheduler-multinode-384000" [f914009d-3787-433d-8e3e-2f597d741c7e] Running
	I0223 17:04:56.105727   30321 system_pods.go:61] "storage-provisioner" [babcd4ec-0d31-417d-a81b-137955e9c31e] Running
	I0223 17:04:56.105731   30321 system_pods.go:74] duration metric: took 185.057517ms to wait for pod list to return data ...
	I0223 17:04:56.105740   30321 default_sa.go:34] waiting for default service account to be created ...
	I0223 17:04:56.300366   30321 request.go:622] Waited for 194.494463ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/default/serviceaccounts
	I0223 17:04:56.300421   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/default/serviceaccounts
	I0223 17:04:56.300429   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.300441   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.300451   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.304478   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:04:56.304495   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.304505   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.304521   30321 round_trippers.go:580]     Content-Length: 261
	I0223 17:04:56.304529   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.304537   30321 round_trippers.go:580]     Audit-Id: a6274a8c-ce66-435f-b002-b64602e18ead
	I0223 17:04:56.304543   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.304549   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.304559   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.304640   30321 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"02334fa8-6050-4807-b381-d1bf7926ee40","resourceVersion":"312","creationTimestamp":"2023-02-24T01:04:39Z"}}]}
	I0223 17:04:56.304782   30321 default_sa.go:45] found service account: "default"
	I0223 17:04:56.304791   30321 default_sa.go:55] duration metric: took 199.048533ms for default service account to be created ...
	I0223 17:04:56.304803   30321 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 17:04:56.500241   30321 request.go:622] Waited for 195.358089ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.500307   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:04:56.500386   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.500400   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.500447   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.505816   30321 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 17:04:56.505829   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.505835   30321 round_trippers.go:580]     Audit-Id: 1b56132a-de22-496e-86d1-a8b285f524ad
	I0223 17:04:56.505840   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.505846   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.505851   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.505859   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.505866   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.506201   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 17:04:56.507473   30321 system_pods.go:86] 8 kube-system pods found
	I0223 17:04:56.507482   30321 system_pods.go:89] "coredns-787d4945fb-nlz4z" [08aa5e04-355e-44b5-a80e-38f3491700e7] Running
	I0223 17:04:56.507486   30321 system_pods.go:89] "etcd-multinode-384000" [c892d753-c892-4834-ba6f-34c4703cfa21] Running
	I0223 17:04:56.507490   30321 system_pods.go:89] "kindnet-n4mpj" [6ef38cba-f7c8-4063-a588-dfd2146fd0a4] Running
	I0223 17:04:56.507493   30321 system_pods.go:89] "kube-apiserver-multinode-384000" [c42cb310-4d3e-44ed-aa9c-0f0bc12249d1] Running
	I0223 17:04:56.507497   30321 system_pods.go:89] "kube-controller-manager-multinode-384000" [ac83dab3-bb77-4542-9452-419c3f5087cb] Running
	I0223 17:04:56.507502   30321 system_pods.go:89] "kube-proxy-wmsxr" [6d046618-e274-4a16-8846-14837962c18d] Running
	I0223 17:04:56.507505   30321 system_pods.go:89] "kube-scheduler-multinode-384000" [f914009d-3787-433d-8e3e-2f597d741c7e] Running
	I0223 17:04:56.507509   30321 system_pods.go:89] "storage-provisioner" [babcd4ec-0d31-417d-a81b-137955e9c31e] Running
	I0223 17:04:56.507513   30321 system_pods.go:126] duration metric: took 202.708387ms to wait for k8s-apps to be running ...
	I0223 17:04:56.507519   30321 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 17:04:56.507576   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:04:56.517668   30321 system_svc.go:56] duration metric: took 10.144194ms WaitForService to wait for kubelet.
	I0223 17:04:56.517681   30321 kubeadm.go:578] duration metric: took 16.462829245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 17:04:56.517696   30321 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:04:56.699314   30321 request.go:622] Waited for 181.572423ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes
	I0223 17:04:56.699359   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes
	I0223 17:04:56.699368   30321 round_trippers.go:469] Request Headers:
	I0223 17:04:56.699378   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:04:56.699427   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:04:56.702684   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:04:56.702697   30321 round_trippers.go:577] Response Headers:
	I0223 17:04:56.702703   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:04:56.702708   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:04:56.702713   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:04:56.702718   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:04:56 GMT
	I0223 17:04:56.702723   30321 round_trippers.go:580]     Audit-Id: b3bf67a2-ffb8-4c1d-8d1d-c0f04b9a7c1a
	I0223 17:04:56.702728   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:04:56.702788   30321 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"408","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5007 chars]
	I0223 17:04:56.703003   30321 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:04:56.703014   30321 node_conditions.go:123] node cpu capacity is 6
	I0223 17:04:56.703024   30321 node_conditions.go:105] duration metric: took 185.326028ms to run NodePressure ...
	I0223 17:04:56.703032   30321 start.go:228] waiting for startup goroutines ...
	I0223 17:04:56.703038   30321 start.go:233] waiting for cluster config update ...
	I0223 17:04:56.703046   30321 start.go:242] writing updated cluster config ...
	I0223 17:04:56.725399   30321 out.go:177] 
	I0223 17:04:56.746945   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:04:56.747045   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:56.769778   30321 out.go:177] * Starting worker node multinode-384000-m02 in cluster multinode-384000
	I0223 17:04:56.812657   30321 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:04:56.835542   30321 out.go:177] * Pulling base image ...
	I0223 17:04:56.895701   30321 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:04:56.895717   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:56.895743   30321 cache.go:57] Caching tarball of preloaded images
	I0223 17:04:56.895977   30321 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:04:56.896004   30321 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 17:04:56.896140   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:04:56.952914   30321 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:04:56.952934   30321 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:04:56.952951   30321 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:04:56.952991   30321 start.go:364] acquiring machines lock for multinode-384000-m02: {Name:mk1527be69dd402dbd34e5a5f430e92116796580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:04:56.953144   30321 start.go:368] acquired machines lock for "multinode-384000-m02" in 140.195µs
	I0223 17:04:56.953171   30321 start.go:93] Provisioning new machine with config: &{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 17:04:56.953241   30321 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 17:04:56.974704   30321 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 17:04:56.974925   30321 start.go:159] libmachine.API.Create for "multinode-384000" (driver="docker")
	I0223 17:04:56.974960   30321 client.go:168] LocalClient.Create starting
	I0223 17:04:56.975194   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 17:04:56.975309   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:56.975336   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:56.975438   30321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 17:04:56.975511   30321 main.go:141] libmachine: Decoding PEM data...
	I0223 17:04:56.975528   30321 main.go:141] libmachine: Parsing certificate...
	I0223 17:04:56.996371   30321 cli_runner.go:164] Run: docker network inspect multinode-384000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 17:04:57.052103   30321 network_create.go:76] Found existing network {name:multinode-384000 subnet:0xc0001464e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 17:04:57.052150   30321 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-384000-m02" container
	I0223 17:04:57.052267   30321 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 17:04:57.109290   30321 cli_runner.go:164] Run: docker volume create multinode-384000-m02 --label name.minikube.sigs.k8s.io=multinode-384000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 17:04:57.164739   30321 oci.go:103] Successfully created a docker volume multinode-384000-m02
	I0223 17:04:57.164878   30321 cli_runner.go:164] Run: docker run --rm --name multinode-384000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000-m02 --entrypoint /usr/bin/test -v multinode-384000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 17:04:57.621060   30321 oci.go:107] Successfully prepared a docker volume multinode-384000-m02
	I0223 17:04:57.621098   30321 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:04:57.621110   30321 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 17:04:57.621231   30321 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 17:05:04.077983   30321 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-384000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.456736337s)
	I0223 17:05:04.078008   30321 kic.go:199] duration metric: took 6.456968 seconds to extract preloaded images to volume
	I0223 17:05:04.078132   30321 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 17:05:04.221365   30321 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-384000-m02 --name multinode-384000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-384000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-384000-m02 --network multinode-384000 --ip 192.168.58.3 --volume multinode-384000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 17:05:04.592292   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Running}}
	I0223 17:05:04.655943   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:05:04.745777   30321 cli_runner.go:164] Run: docker exec multinode-384000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 17:05:04.851906   30321 oci.go:144] the created container "multinode-384000-m02" has a running status.
	I0223 17:05:04.851934   30321 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa...
	I0223 17:05:05.035027   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 17:05:05.035093   30321 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 17:05:05.140942   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:05:05.201256   30321 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 17:05:05.201277   30321 kic_runner.go:114] Args: [docker exec --privileged multinode-384000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 17:05:05.310563   30321 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:05:05.368035   30321 machine.go:88] provisioning docker machine ...
	I0223 17:05:05.368075   30321 ubuntu.go:169] provisioning hostname "multinode-384000-m02"
	I0223 17:05:05.368180   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:05.426449   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:05.426841   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:05.426851   30321 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-384000-m02 && echo "multinode-384000-m02" | sudo tee /etc/hostname
	I0223 17:05:05.569180   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-384000-m02
	
	I0223 17:05:05.569295   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:05.628418   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:05.628769   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:05.628786   30321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-384000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-384000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-384000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:05:05.764951   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:05:05.764973   30321 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:05:05.764987   30321 ubuntu.go:177] setting up certificates
	I0223 17:05:05.764993   30321 provision.go:83] configureAuth start
	I0223 17:05:05.765071   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000-m02
	I0223 17:05:05.822213   30321 provision.go:138] copyHostCerts
	I0223 17:05:05.822267   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:05:05.822324   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:05:05.822330   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:05:05.822445   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:05:05.822609   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:05:05.822639   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:05:05.822644   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:05:05.822706   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:05:05.822824   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:05:05.822858   30321 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:05:05.822862   30321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:05:05.822924   30321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:05:05.823045   30321 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.multinode-384000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-384000-m02]
	I0223 17:05:05.990734   30321 provision.go:172] copyRemoteCerts
	I0223 17:05:05.990799   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:05:05.990857   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.048588   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:06.143936   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 17:05:06.144018   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:05:06.161517   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 17:05:06.161592   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 17:05:06.179781   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 17:05:06.179873   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:05:06.197146   30321 provision.go:86] duration metric: configureAuth took 432.148846ms
	I0223 17:05:06.197159   30321 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:05:06.197315   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:05:06.197385   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.255268   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:06.255640   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:06.255652   30321 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:05:06.392853   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:05:06.392884   30321 ubuntu.go:71] root file system type: overlay
	I0223 17:05:06.392997   30321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:05:06.393086   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.450967   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:06.451322   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:06.451378   30321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:05:06.593100   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:05:06.593197   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:06.652287   30321 main.go:141] libmachine: Using SSH client type: native
	I0223 17:05:06.652691   30321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58195 <nil> <nil>}
	I0223 17:05:06.652705   30321 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:05:07.278808   30321 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:05:06.590454180 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 17:05:07.278835   30321 machine.go:91] provisioned docker machine in 1.910797134s
	I0223 17:05:07.278841   30321 client.go:171] LocalClient.Create took 10.303988911s
	I0223 17:05:07.278857   30321 start.go:167] duration metric: libmachine.API.Create for "multinode-384000" took 10.304049402s
	I0223 17:05:07.278864   30321 start.go:300] post-start starting for "multinode-384000-m02" (driver="docker")
	I0223 17:05:07.278874   30321 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:05:07.278956   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:05:07.279015   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.338828   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:07.434171   30321 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:05:07.437869   30321 command_runner.go:130] > NAME="Ubuntu"
	I0223 17:05:07.437879   30321 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 17:05:07.437883   30321 command_runner.go:130] > ID=ubuntu
	I0223 17:05:07.437887   30321 command_runner.go:130] > ID_LIKE=debian
	I0223 17:05:07.437894   30321 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 17:05:07.437899   30321 command_runner.go:130] > VERSION_ID="20.04"
	I0223 17:05:07.437903   30321 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 17:05:07.437909   30321 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 17:05:07.437914   30321 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 17:05:07.437921   30321 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 17:05:07.437926   30321 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 17:05:07.437936   30321 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 17:05:07.437990   30321 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:05:07.438006   30321 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:05:07.438013   30321 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:05:07.438017   30321 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:05:07.438023   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:05:07.438115   30321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:05:07.438288   30321 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:05:07.438293   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /etc/ssl/certs/248852.pem
	I0223 17:05:07.438480   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:05:07.445883   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:05:07.463353   30321 start.go:303] post-start completed in 184.476962ms
	I0223 17:05:07.463880   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000-m02
	I0223 17:05:07.521526   30321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/config.json ...
	I0223 17:05:07.521986   30321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:05:07.522116   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.579004   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:07.670593   30321 command_runner.go:130] > 6%!
	(MISSING)I0223 17:05:07.670679   30321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:05:07.675002   30321 command_runner.go:130] > 92G
	I0223 17:05:07.675361   30321 start.go:128] duration metric: createHost completed in 10.722232854s
	I0223 17:05:07.675374   30321 start.go:83] releasing machines lock for "multinode-384000-m02", held for 10.722341134s
	I0223 17:05:07.675460   30321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000-m02
	I0223 17:05:07.756730   30321 out.go:177] * Found network options:
	I0223 17:05:07.777737   30321 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 17:05:07.799691   30321 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 17:05:07.799742   30321 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 17:05:07.799873   30321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:05:07.799981   30321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 17:05:07.799987   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.800095   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:05:07.862121   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:07.862223   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:05:08.010244   30321 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 17:05:08.010300   30321 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 17:05:08.010307   30321 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 17:05:08.010314   30321 command_runner.go:130] > Device: 10001bh/1048603d	Inode: 2885211     Links: 1
	I0223 17:05:08.010319   30321 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:05:08.010325   30321 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:05:08.010330   30321 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 17:05:08.010335   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.964225417 +0000
	I0223 17:05:08.010340   30321 command_runner.go:130] >  Birth: -
	I0223 17:05:08.010432   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:05:08.031166   30321 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:05:08.031244   30321 ssh_runner.go:195] Run: which cri-dockerd
	I0223 17:05:08.035387   30321 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 17:05:08.035474   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 17:05:08.042907   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 17:05:08.056004   30321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 17:05:08.070780   30321 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 17:05:08.070824   30321 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 17:05:08.070835   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:05:08.070849   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:05:08.070931   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:05:08.083682   30321 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:05:08.083697   30321 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 17:05:08.084488   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 17:05:08.093536   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:05:08.102147   30321 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:05:08.102214   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:05:08.111079   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:05:08.119479   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:05:08.127924   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:05:08.136205   30321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:05:08.144205   30321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:05:08.152782   30321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:05:08.159430   30321 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 17:05:08.160102   30321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:05:08.167247   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:05:08.251984   30321 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:05:08.323259   30321 start.go:485] detecting cgroup driver to use...
	I0223 17:05:08.323279   30321 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:05:08.323342   30321 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:05:08.339960   30321 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 17:05:08.340119   30321 command_runner.go:130] > [Unit]
	I0223 17:05:08.340128   30321 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 17:05:08.340137   30321 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 17:05:08.340145   30321 command_runner.go:130] > BindsTo=containerd.service
	I0223 17:05:08.340153   30321 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 17:05:08.340158   30321 command_runner.go:130] > Wants=network-online.target
	I0223 17:05:08.340198   30321 command_runner.go:130] > Requires=docker.socket
	I0223 17:05:08.340215   30321 command_runner.go:130] > StartLimitBurst=3
	I0223 17:05:08.340227   30321 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 17:05:08.340235   30321 command_runner.go:130] > [Service]
	I0223 17:05:08.340240   30321 command_runner.go:130] > Type=notify
	I0223 17:05:08.340245   30321 command_runner.go:130] > Restart=on-failure
	I0223 17:05:08.340250   30321 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 17:05:08.340256   30321 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 17:05:08.340281   30321 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 17:05:08.340289   30321 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 17:05:08.340295   30321 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 17:05:08.340303   30321 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 17:05:08.340308   30321 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 17:05:08.340314   30321 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 17:05:08.340327   30321 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 17:05:08.340336   30321 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 17:05:08.340339   30321 command_runner.go:130] > ExecStart=
	I0223 17:05:08.340350   30321 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 17:05:08.340355   30321 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 17:05:08.340360   30321 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 17:05:08.340365   30321 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 17:05:08.340369   30321 command_runner.go:130] > LimitNOFILE=infinity
	I0223 17:05:08.340372   30321 command_runner.go:130] > LimitNPROC=infinity
	I0223 17:05:08.340378   30321 command_runner.go:130] > LimitCORE=infinity
	I0223 17:05:08.340382   30321 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 17:05:08.340387   30321 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 17:05:08.340390   30321 command_runner.go:130] > TasksMax=infinity
	I0223 17:05:08.340393   30321 command_runner.go:130] > TimeoutStartSec=0
	I0223 17:05:08.340399   30321 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 17:05:08.340402   30321 command_runner.go:130] > Delegate=yes
	I0223 17:05:08.340411   30321 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 17:05:08.340415   30321 command_runner.go:130] > KillMode=process
	I0223 17:05:08.340418   30321 command_runner.go:130] > [Install]
	I0223 17:05:08.340423   30321 command_runner.go:130] > WantedBy=multi-user.target
	I0223 17:05:08.340775   30321 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:05:08.340851   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:05:08.351191   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:05:08.364825   30321 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:05:08.364838   30321 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 17:05:08.365884   30321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:05:08.464645   30321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:05:08.557690   30321 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:05:08.557707   30321 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:05:08.571730   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:05:08.666053   30321 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:05:08.890228   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:05:08.972036   30321 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 17:05:08.972111   30321 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 17:05:09.042258   30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:05:09.117012   30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:05:09.190875   30321 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 17:05:09.202546   30321 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 17:05:09.202636   30321 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 17:05:09.206733   30321 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 17:05:09.206746   30321 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 17:05:09.206753   30321 command_runner.go:130] > Device: 100023h/1048611d	Inode: 206         Links: 1
	I0223 17:05:09.206762   30321 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 17:05:09.206768   30321 command_runner.go:130] > Access: 2023-02-24 01:05:09.198454010 +0000
	I0223 17:05:09.206775   30321 command_runner.go:130] > Modify: 2023-02-24 01:05:09.198454010 +0000
	I0223 17:05:09.206784   30321 command_runner.go:130] > Change: 2023-02-24 01:05:09.199454010 +0000
	I0223 17:05:09.206795   30321 command_runner.go:130] >  Birth: -
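The same socket can be checked by hand with standard tooling; a minimal sketch:
  systemctl is-active cri-docker.socket                           # "active" once the socket unit is up
  test -S /var/run/cri-dockerd.sock && echo socket present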
	I0223 17:05:09.206816   30321 start.go:553] Will wait 60s for crictl version
	I0223 17:05:09.206888   30321 ssh_runner.go:195] Run: which crictl
	I0223 17:05:09.210420   30321 command_runner.go:130] > /usr/bin/crictl
	I0223 17:05:09.210468   30321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 17:05:09.303225   30321 command_runner.go:130] > Version:  0.1.0
	I0223 17:05:09.303238   30321 command_runner.go:130] > RuntimeName:  docker
	I0223 17:05:09.303242   30321 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 17:05:09.303246   30321 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 17:05:09.305294   30321 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 17:05:09.305380   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:05:09.330876   30321 command_runner.go:130] > 23.0.1
	I0223 17:05:09.332433   30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:05:09.356414   30321 command_runner.go:130] > 23.0.1
	I0223 17:05:09.400447   30321 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 17:05:09.421390   30321 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 17:05:09.442812   30321 cli_runner.go:164] Run: docker exec -t multinode-384000-m02 dig +short host.docker.internal
	I0223 17:05:09.559246   30321 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:05:09.559354   30321 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:05:09.563904   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
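The command above is the usual idempotent-append pattern for /etc/hosts: strip any old entry for the name, re-add the current mapping, and copy the result back in one step. A generic sketch with the values from this run as placeholders:
  NAME=host.minikube.internal; IP=192.168.65.2                    # adjust as needed
  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts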
	I0223 17:05:09.574258   30321 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000 for IP: 192.168.58.3
	I0223 17:05:09.574275   30321 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:05:09.574462   30321 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:05:09.574527   30321 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:05:09.574543   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 17:05:09.574567   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 17:05:09.574585   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 17:05:09.574609   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 17:05:09.574707   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:05:09.574757   30321 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:05:09.574768   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:05:09.574827   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:05:09.574868   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:05:09.574898   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:05:09.574966   30321 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:05:09.575001   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem -> /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.575021   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.575044   30321 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.575346   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:05:09.592916   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:05:09.610527   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:05:09.628140   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:05:09.645408   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:05:09.662952   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:05:09.695538   30321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
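A short sketch for spot-checking any of the certificates copied above, assuming openssl on the node (the log confirms OpenSSL 1.1.1f immediately below):
  sudo openssl x509 -in /var/lib/minikube/certs/ca.crt -noout -subject -issuer -dates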
	I0223 17:05:09.713051   30321 ssh_runner.go:195] Run: openssl version
	I0223 17:05:09.718333   30321 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 17:05:09.718660   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:05:09.726987   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.730907   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.730990   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.731036   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:05:09.736123   30321 command_runner.go:130] > 3ec20f2e
	I0223 17:05:09.736564   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 17:05:09.744982   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:05:09.753293   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.757358   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.757469   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.757531   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:05:09.762804   30321 command_runner.go:130] > b5213941
	I0223 17:05:09.763200   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:05:09.771538   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:05:09.779871   30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.783786   30321 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.783818   30321 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.783859   30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:05:09.789022   30321 command_runner.go:130] > 51391683
	I0223 17:05:09.789291   30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
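The hash-and-symlink steps above follow the OpenSSL hashed-directory convention: "openssl x509 -hash" prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets anything using -CApath find the CA. A compact sketch, where cert.pem is a hypothetical certificate to verify:
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/"$h".0
  openssl verify -CApath /etc/ssl/certs cert.pem                  # cert.pem is hypothetical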
	I0223 17:05:09.797739   30321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:05:09.822341   30321 command_runner.go:130] > cgroupfs
	I0223 17:05:09.824054   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:05:09.824066   30321 cni.go:136] 2 nodes found, recommending kindnet
	I0223 17:05:09.824073   30321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:05:09.824085   30321 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-384000 NodeName:multinode-384000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:05:09.824161   30321 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-384000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
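One way to sanity-check a generated config of this shape, assuming the kubeadm v1.26.1 used here, is to compare it against the defaults kubeadm itself prints; a one-line sketch:
  kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration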
	
	I0223 17:05:09.824206   30321 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-384000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 17:05:09.824271   30321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 17:05:09.831786   30321 command_runner.go:130] > kubeadm
	I0223 17:05:09.831795   30321 command_runner.go:130] > kubectl
	I0223 17:05:09.831802   30321 command_runner.go:130] > kubelet
	I0223 17:05:09.832530   30321 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:05:09.832588   30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 17:05:09.840179   30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 17:05:09.853476   30321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
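After the drop-in and unit file land, the effective kubelet unit can be reviewed with standard systemd tooling; a minimal sketch:
  systemctl cat kubelet                                           # unit file plus the 10-kubeadm.conf drop-in
  systemctl status kubelet --no-pager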
	I0223 17:05:09.866808   30321 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:05:09.870658   30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:05:09.880714   30321 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:05:09.880895   30321 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:05:09.880910   30321 start.go:301] JoinCluster: &{Name:multinode-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:05:09.880969   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 17:05:09.881029   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:05:09.939886   30321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:05:10.114312   30321 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mflu20.fcb217p8h9corip6 --discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 
	I0223 17:05:10.114363   30321 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 17:05:10.114395   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mflu20.fcb217p8h9corip6 --discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-384000-m02"
	I0223 17:05:10.153989   30321 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 17:05:10.266110   30321 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 17:05:10.266130   30321 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 17:05:10.292288   30321 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:05:10.292302   30321 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:05:10.292313   30321 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 17:05:10.374526   30321 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 17:05:11.887197   30321 command_runner.go:130] > This node has joined the cluster:
	I0223 17:05:11.887211   30321 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 17:05:11.887217   30321 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 17:05:11.887225   30321 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 17:05:11.890389   30321 command_runner.go:130] ! W0224 01:05:10.153407    1235 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 17:05:11.890404   30321 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 17:05:11.890413   30321 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:05:11.890430   30321 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mflu20.fcb217p8h9corip6 --discovery-token-ca-cert-hash sha256:66c5479caf151644de9cc25dfec0251e02b52644ccfb88194f97e8f8c0322961 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-384000-m02": (1.776040987s)
	I0223 17:05:11.890446   30321 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 17:05:12.019786   30321 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 17:05:12.019816   30321 start.go:303] JoinCluster complete in 2.138928977s
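The join flow above is the standard two-step kubeadm pattern and can be reproduced by hand; token and hash below are placeholders, not the values from this run:
  # on the control-plane node
  kubeadm token create --print-join-command --ttl=0
  # on the worker, using the printed values
  sudo kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>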
	I0223 17:05:12.019830   30321 cni.go:84] Creating CNI manager for ""
	I0223 17:05:12.019842   30321 cni.go:136] 2 nodes found, recommending kindnet
	I0223 17:05:12.019955   30321 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 17:05:12.024878   30321 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 17:05:12.024900   30321 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 17:05:12.024917   30321 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0223 17:05:12.024929   30321 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 17:05:12.024937   30321 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:05:12.024943   30321 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 17:05:12.024951   30321 command_runner.go:130] > Change: 2023-02-24 00:41:32.136225471 +0000
	I0223 17:05:12.024957   30321 command_runner.go:130] >  Birth: -
	I0223 17:05:12.025085   30321 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 17:05:12.025096   30321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 17:05:12.039239   30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 17:05:12.225961   30321 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 17:05:12.229143   30321 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 17:05:12.230957   30321 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 17:05:12.239205   30321 command_runner.go:130] > daemonset.apps/kindnet configured
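A quick check that the applied CNI manifest produced the objects reported above; a sketch:
  kubectl -n kube-system get daemonset kindnet
  kubectl -n kube-system get serviceaccount kindnet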
	I0223 17:05:12.245445   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:05:12.245662   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:05:12.245949   30321 round_trippers.go:463] GET https://127.0.0.1:58131/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 17:05:12.245957   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.245963   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.245969   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.248382   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.248392   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.248398   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.248404   30321 round_trippers.go:580]     Audit-Id: ea2056a3-bae1-48cb-b05a-d18528f66c75
	I0223 17:05:12.248409   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.248414   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.248419   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.248425   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.248432   30321 round_trippers.go:580]     Content-Length: 291
	I0223 17:05:12.248443   30321 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"912d33b7-ad0b-4681-a3f3-ce58d0d7ef1a","resourceVersion":"430","creationTimestamp":"2023-02-24T01:04:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 17:05:12.248495   30321 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-384000" context rescaled to 1 replicas
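The same rescale can be expressed with kubectl against the scale subresource; a one-line sketch:
  kubectl -n kube-system scale deployment coredns --replicas=1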
	I0223 17:05:12.248511   30321 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 17:05:12.271711   30321 out.go:177] * Verifying Kubernetes components...
	I0223 17:05:12.314058   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:05:12.326008   30321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:05:12.386090   30321 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:05:12.386332   30321 kapi.go:59] client config for multinode-384000: &rest.Config{Host:"https://127.0.0.1:58131", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/multinode-384000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:05:12.386562   30321 node_ready.go:35] waiting up to 6m0s for node "multinode-384000-m02" to be "Ready" ...
	I0223 17:05:12.386604   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:12.386608   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.386614   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.386622   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.389431   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.389443   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.389449   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.389454   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.389459   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.389463   30321 round_trippers.go:580]     Audit-Id: bead5394-02db-4b8b-9355-a8baf5674402
	I0223 17:05:12.389468   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.389473   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.389541   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:12.389737   30321 node_ready.go:49] node "multinode-384000-m02" has status "Ready":"True"
	I0223 17:05:12.389743   30321 node_ready.go:38] duration metric: took 3.172945ms waiting for node "multinode-384000-m02" to be "Ready" ...
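The readiness poll above maps onto a standard kubectl wait; a sketch using the same node name and timeout:
  kubectl wait --for=condition=Ready node/multinode-384000-m02 --timeout=6m
  kubectl get node multinode-384000-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'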
	I0223 17:05:12.389748   30321 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:05:12.389790   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods
	I0223 17:05:12.389795   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.389800   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.389807   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.393300   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:12.393313   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.393318   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.393324   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.393331   30321 round_trippers.go:580]     Audit-Id: 53584cab-94ca-498b-b86b-54a02b2adecb
	I0223 17:05:12.393337   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.393342   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.393350   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.394551   30321 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"477"},"items":[{"metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0223 17:05:12.396798   30321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.396879   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nlz4z
	I0223 17:05:12.396887   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.396896   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.396904   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.399747   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.399760   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.399766   30321 round_trippers.go:580]     Audit-Id: 1c532d78-c7eb-4ef2-947a-f201f9ab9909
	I0223 17:05:12.399772   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.399777   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.399782   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.399790   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.399797   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.399860   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-nlz4z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"08aa5e04-355e-44b5-a80e-38f3491700e7","resourceVersion":"426","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"d638838d-80c0-419d-84f9-73793f47f13e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d638838d-80c0-419d-84f9-73793f47f13e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 17:05:12.400113   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.400121   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.400129   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.400137   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.402360   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.402370   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.402378   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.402391   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.402397   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.402404   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.402410   30321 round_trippers.go:580]     Audit-Id: 354377ae-0145-4719-bd52-9603b9baf89e
	I0223 17:05:12.402418   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.402669   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.402854   30321 pod_ready.go:92] pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.402861   30321 pod_ready.go:81] duration metric: took 6.045328ms waiting for pod "coredns-787d4945fb-nlz4z" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.402868   30321 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.402898   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/etcd-multinode-384000
	I0223 17:05:12.402904   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.402910   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.402918   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.405228   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.405240   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.405248   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.405254   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.405260   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.405266   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.405273   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.405279   30321 round_trippers.go:580]     Audit-Id: 77b942b3-e5e8-4bd8-8ecf-5cd4ba94ebd4
	I0223 17:05:12.405338   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-384000","namespace":"kube-system","uid":"c892d753-c892-4834-ba6f-34c4703cfa21","resourceVersion":"266","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.mirror":"657e2e903e35ddf52c4f23cc480a0a6a","kubernetes.io/config.seen":"2023-02-24T01:04:26.472791839Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 17:05:12.405580   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.405586   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.405592   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.405601   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.407513   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:05:12.407521   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.407526   30321 round_trippers.go:580]     Audit-Id: 17744f25-8693-4281-bf89-144bfeeaf1d9
	I0223 17:05:12.407531   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.407538   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.407544   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.407548   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.407554   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.407611   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.407784   30321 pod_ready.go:92] pod "etcd-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.407789   30321 pod_ready.go:81] duration metric: took 4.917084ms waiting for pod "etcd-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.407799   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.407829   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-384000
	I0223 17:05:12.407833   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.407839   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.407845   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.409860   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.409869   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.409876   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.409881   30321 round_trippers.go:580]     Audit-Id: e5ddbe1c-92ae-40f9-9dbb-d5800989f628
	I0223 17:05:12.409887   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.409892   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.409898   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.409904   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.409983   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-384000","namespace":"kube-system","uid":"c42cb310-4d3e-44ed-aa9c-0f0bc12249d1","resourceVersion":"261","creationTimestamp":"2023-02-24T01:04:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.mirror":"b8de13a5a84c1ea264205bf0af6c4906","kubernetes.io/config.seen":"2023-02-24T01:04:17.403781278Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 17:05:12.410242   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.410248   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.410254   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.410260   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.412472   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.412481   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.412487   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.412492   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.412500   30321 round_trippers.go:580]     Audit-Id: c96077ff-adfc-472f-bb65-e802e9b61025
	I0223 17:05:12.412505   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.412511   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.412517   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.412582   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.412748   30321 pod_ready.go:92] pod "kube-apiserver-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.412753   30321 pod_ready.go:81] duration metric: took 4.949693ms waiting for pod "kube-apiserver-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.412759   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.412786   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-384000
	I0223 17:05:12.412791   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.412797   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.412803   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.414996   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.415009   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.415015   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.415020   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.415029   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.415035   30321 round_trippers.go:580]     Audit-Id: b9382983-4d6e-43ea-9a06-be0c6b02d42a
	I0223 17:05:12.415040   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.415045   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.415132   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-384000","namespace":"kube-system","uid":"ac83dab3-bb77-4542-9452-419c3f5087cb","resourceVersion":"264","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.mirror":"cb6cd6332c76e8ae5bfced6be99d18bd","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 17:05:12.415410   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:12.415417   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.415425   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.415433   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.418632   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:12.418643   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.418652   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.418657   30321 round_trippers.go:580]     Audit-Id: 630250ed-1dce-4c98-8381-43ac62ac4a39
	I0223 17:05:12.418662   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.418667   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.418672   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.418679   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.418962   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:12.419166   30321 pod_ready.go:92] pod "kube-controller-manager-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:12.419172   30321 pod_ready.go:81] duration metric: took 6.407903ms waiting for pod "kube-controller-manager-multinode-384000" in "kube-system" namespace to be "Ready" ...
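The per-pod checks above (coredns, etcd, kube-apiserver, kube-controller-manager, and next kube-proxy) can be approximated in one shot; a sketch, assuming the system pods do eventually settle:
  kubectl -n kube-system get pods -o wide
  kubectl -n kube-system wait --for=condition=Ready pod --all --timeout=6m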
	I0223 17:05:12.419178   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q28gd" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:12.588005   30321 request.go:622] Waited for 168.736696ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:12.588048   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:12.588055   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.588064   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.588072   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.590978   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:12.590991   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.590997   30321 round_trippers.go:580]     Audit-Id: 9dc39c88-79f2-474d-8f4a-d3217a686c41
	I0223 17:05:12.591001   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.591006   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.591012   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.591017   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.591035   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.591197   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"463","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 17:05:12.786943   30321 request.go:622] Waited for 195.482229ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:12.787102   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:12.787110   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:12.787122   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:12.787133   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:12.791223   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:12.791242   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:12.791250   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:12 GMT
	I0223 17:05:12.791258   30321 round_trippers.go:580]     Audit-Id: 5edde90b-f341-4547-959f-3dbfac67ca4e
	I0223 17:05:12.791266   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:12.791272   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:12.791280   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:12.791287   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:12.791367   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:13.291848   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:13.291876   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.291889   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.291898   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.295653   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:13.295666   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.295673   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.295681   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.295688   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.295693   30321 round_trippers.go:580]     Audit-Id: 7a954963-1cce-4cd3-ab9e-3ee5a85eacad
	I0223 17:05:13.295697   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.295703   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.295763   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"463","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 17:05:13.296004   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:13.296010   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.296016   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.296030   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.298583   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:13.298596   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.298602   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.298617   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.298626   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.298634   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.298646   30321 round_trippers.go:580]     Audit-Id: b3a81aa1-530a-4ff7-8cb7-67ef94eec823
	I0223 17:05:13.298661   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.298719   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:13.793899   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:13.793925   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.794018   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.794034   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.798151   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:13.798167   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.798175   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.798182   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.798194   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.798202   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.798209   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.798216   30321 round_trippers.go:580]     Audit-Id: db81ceaf-56c2-4228-8379-59ca8d7862e5
	I0223 17:05:13.798306   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:13.798553   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:13.798558   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:13.798564   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:13.798576   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:13.801544   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:13.801558   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:13.801564   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:13.801569   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:13 GMT
	I0223 17:05:13.801573   30321 round_trippers.go:580]     Audit-Id: c388a579-34a7-4c6e-a4e2-4d5e6f2fe4af
	I0223 17:05:13.801577   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:13.801583   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:13.801588   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:13.801663   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:14.291809   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:14.291824   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.291843   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.291850   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.294441   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:14.294452   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.294461   30321 round_trippers.go:580]     Audit-Id: 31c1a410-41f8-4af3-ad86-b6dff5c28d10
	I0223 17:05:14.294472   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.294477   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.294482   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.294487   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.294492   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.294547   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:14.294813   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:14.294820   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.294826   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.294831   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.297192   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:14.297202   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.297208   30321 round_trippers.go:580]     Audit-Id: d6b67bc2-a54f-46c3-8042-4d165fe8ceec
	I0223 17:05:14.297213   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.297217   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.297224   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.297233   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.297238   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.297327   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:14.792083   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:14.792110   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.792123   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.792133   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.795831   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:14.795845   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.795850   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.795855   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.795860   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.795865   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.795870   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.795874   30321 round_trippers.go:580]     Audit-Id: ae4a7367-d438-424c-a55f-70205c89bc50
	I0223 17:05:14.795935   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:14.796182   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:14.796189   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:14.796195   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:14.796200   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:14.798521   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:14.798533   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:14.798539   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:14.798546   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:14.798551   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:14.798556   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:14.798560   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:14 GMT
	I0223 17:05:14.798565   30321 round_trippers.go:580]     Audit-Id: d9ef0bc3-b3db-419a-8689-df682f462d3a
	I0223 17:05:14.798614   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:14.798779   30321 pod_ready.go:102] pod "kube-proxy-q28gd" in "kube-system" namespace has status "Ready":"False"
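	(The roughly 500ms cadence of the GETs above is a poll-until-Ready loop. A generic client-go sketch of such a loop, using a hypothetical waitForPodReady helper rather than minikube's own pod_ready.go, which is not reproduced here:

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls the API server until the pod's Ready condition is
	// True or the timeout expires, mirroring the GET-every-~500ms pattern
	// visible in the log.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	PollImmediate checks the condition once up front and then on each interval until the timeout, here 6m0s in the log, elapses.)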
	I0223 17:05:15.291967   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:15.291992   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.292004   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.292014   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.296638   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:15.296653   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.296659   30321 round_trippers.go:580]     Audit-Id: 0f3151cd-7492-4803-84a8-a7b2593cfbff
	I0223 17:05:15.296664   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.296669   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.296673   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.296679   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.296685   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.296747   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:15.297015   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:15.297021   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.297027   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.297033   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.299045   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:15.299054   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.299062   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.299068   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.299073   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.299078   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.299082   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.299088   30321 round_trippers.go:580]     Audit-Id: 0f8438de-cf80-4e2b-9473-ece53f15e408
	I0223 17:05:15.299130   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:15.791770   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:15.791788   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.791797   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.791807   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.794876   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:15.794891   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.794904   30321 round_trippers.go:580]     Audit-Id: 2dc876d7-92fb-46d1-a597-ef5b39be1b87
	I0223 17:05:15.794914   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.794925   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.794933   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.794956   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.794971   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.795069   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:15.795341   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:15.795348   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:15.795354   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:15.795364   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:15.797912   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:15.797928   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:15.797934   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:15.797940   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:15.797945   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:15 GMT
	I0223 17:05:15.797950   30321 round_trippers.go:580]     Audit-Id: e841ef89-e713-42a9-83d2-c30b3af5214e
	I0223 17:05:15.797956   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:15.797961   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:15.798010   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:16.292138   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:16.292164   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.292176   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.292186   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.296455   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:16.296470   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.296484   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.296492   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.296498   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.296506   30321 round_trippers.go:580]     Audit-Id: 9bbc4e9b-6ca9-451a-92be-1ee9c8728ba4
	I0223 17:05:16.296514   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.296519   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.296698   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:16.296951   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:16.296957   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.296963   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.296969   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.298985   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:16.298995   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.299001   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.299007   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.299012   30321 round_trippers.go:580]     Audit-Id: 53e1ff72-fd22-42e4-9732-2f5da00ce66f
	I0223 17:05:16.299018   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.299023   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.299028   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.299075   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:16.791962   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:16.791995   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.792008   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.792017   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.795678   30321 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 17:05:16.795690   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.795696   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.795702   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.795709   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.795719   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.795729   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.795736   30321 round_trippers.go:580]     Audit-Id: 1c3d2a12-85c9-4fc8-9114-55cdee49a74b
	I0223 17:05:16.795922   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"479","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 17:05:16.796245   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:16.796253   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:16.796260   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:16.796265   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:16.798495   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:16.798507   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:16.798513   30321 round_trippers.go:580]     Audit-Id: 75c69917-1f17-4c7d-a4cb-1dce5c4ecefd
	I0223 17:05:16.798519   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:16.798524   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:16.798529   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:16.798533   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:16.798539   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:16 GMT
	I0223 17:05:16.798579   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:17.293215   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-q28gd
	I0223 17:05:17.293284   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.293299   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.293311   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.297399   30321 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 17:05:17.297413   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.297422   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.297433   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.297441   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.297447   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.297455   30321 round_trippers.go:580]     Audit-Id: e64aab21-337d-4605-9232-31a915e3e8f7
	I0223 17:05:17.297463   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.297537   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q28gd","generateName":"kube-proxy-","namespace":"kube-system","uid":"60e77e61-fe66-4e4e-8f80-c3bba6f3d319","resourceVersion":"490","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 17:05:17.297837   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000-m02
	I0223 17:05:17.297843   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.297849   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.297855   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.299815   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:05:17.299824   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.299829   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.299835   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.299840   30321 round_trippers.go:580]     Audit-Id: 489a7480-0712-4769-8bae-e4b3dfdd2940
	I0223 17:05:17.299844   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.299850   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.299854   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.299905   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000-m02","uid":"764f852e-4d41-4409-a9ff-ae41002a8a12","resourceVersion":"476","creationTimestamp":"2023-02-24T01:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0223 17:05:17.300058   30321 pod_ready.go:92] pod "kube-proxy-q28gd" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:17.300067   30321 pod_ready.go:81] duration metric: took 4.880938879s waiting for pod "kube-proxy-q28gd" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.300079   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.300122   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-proxy-wmsxr
	I0223 17:05:17.300128   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.300134   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.300140   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.302739   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.302752   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.302761   30321 round_trippers.go:580]     Audit-Id: ee5468e2-2ffa-4ec5-8128-ee2e63879f80
	I0223 17:05:17.302790   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.302801   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.302809   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.302816   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.302824   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.302951   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wmsxr","generateName":"kube-proxy-","namespace":"kube-system","uid":"6d046618-e274-4a16-8846-14837962c18d","resourceVersion":"391","creationTimestamp":"2023-02-24T01:04:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a40a53ef-4501-4f1e-b33b-1c3a083df3a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a40a53ef-4501-4f1e-b33b-1c3a083df3a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 17:05:17.303293   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:17.303304   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.303313   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.303320   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.306226   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.306237   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.306243   30321 round_trippers.go:580]     Audit-Id: 8ef671db-d50b-4f6e-80b9-e6ffe1659f1f
	I0223 17:05:17.306248   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.306254   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.306258   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.306263   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.306268   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.306320   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:17.306506   30321 pod_ready.go:92] pod "kube-proxy-wmsxr" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:17.306512   30321 pod_ready.go:81] duration metric: took 6.418198ms waiting for pod "kube-proxy-wmsxr" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.306517   30321 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.306552   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-384000
	I0223 17:05:17.306556   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.306562   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.306569   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.308591   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.308601   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.308607   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.308612   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.308618   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.308623   30321 round_trippers.go:580]     Audit-Id: 5d223e83-7185-4965-a56b-f6c3be8f5bff
	I0223 17:05:17.308627   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.308632   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.308685   30321 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-384000","namespace":"kube-system","uid":"f914009d-3787-433d-8e3e-2f597d741c7e","resourceVersion":"279","creationTimestamp":"2023-02-24T01:04:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.mirror":"7ca5a1853e73c27545a20428af78eb37","kubernetes.io/config.seen":"2023-02-24T01:04:26.472807884Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:04:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 17:05:17.308908   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes/multinode-384000
	I0223 17:05:17.308914   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.308920   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.308926   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.310802   30321 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 17:05:17.310814   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.310820   30321 round_trippers.go:580]     Audit-Id: a6419459-b53e-4e20-bfde-cc0d2020cf8e
	I0223 17:05:17.310825   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.310830   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.310834   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.310839   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.310843   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.310906   30321 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:04:23Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 17:05:17.311107   30321 pod_ready.go:92] pod "kube-scheduler-multinode-384000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:05:17.311114   30321 pod_ready.go:81] duration metric: took 4.591821ms waiting for pod "kube-scheduler-multinode-384000" in "kube-system" namespace to be "Ready" ...
	I0223 17:05:17.311120   30321 pod_ready.go:38] duration metric: took 4.921421097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:05:17.311132   30321 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 17:05:17.311189   30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:05:17.321070   30321 system_svc.go:56] duration metric: took 9.933593ms WaitForService to wait for kubelet.
	I0223 17:05:17.321083   30321 kubeadm.go:578] duration metric: took 5.072613007s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 17:05:17.321097   30321 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:05:17.386575   30321 request.go:622] Waited for 65.441869ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58131/api/v1/nodes
	I0223 17:05:17.386620   30321 round_trippers.go:463] GET https://127.0.0.1:58131/api/v1/nodes
	I0223 17:05:17.386625   30321 round_trippers.go:469] Request Headers:
	I0223 17:05:17.386637   30321 round_trippers.go:473]     Accept: application/json, */*
	I0223 17:05:17.386644   30321 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 17:05:17.389374   30321 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 17:05:17.389385   30321 round_trippers.go:577] Response Headers:
	I0223 17:05:17.389392   30321 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4f9fd3f-0154-4502-96f6-710495872770
	I0223 17:05:17.389399   30321 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:05:17 GMT
	I0223 17:05:17.389406   30321 round_trippers.go:580]     Audit-Id: 71aace31-8e13-4764-b7ab-2975d451c1ca
	I0223 17:05:17.389411   30321 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 17:05:17.389423   30321 round_trippers.go:580]     Content-Type: application/json
	I0223 17:05:17.389432   30321 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 82a9549a-b7b9-454f-bf2f-d9ed28bc2b0b
	I0223 17:05:17.389682   30321 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-384000","uid":"de9e77f9-6097-48ed-ae7d-b970b0007ee7","resourceVersion":"433","creationTimestamp":"2023-02-24T01:04:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-384000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-384000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T17_04_27_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10175 chars]
	I0223 17:05:17.390006   30321 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:05:17.390014   30321 node_conditions.go:123] node cpu capacity is 6
	I0223 17:05:17.390028   30321 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:05:17.390033   30321 node_conditions.go:123] node cpu capacity is 6
	I0223 17:05:17.390036   30321 node_conditions.go:105] duration metric: took 68.936647ms to run NodePressure ...
	I0223 17:05:17.390044   30321 start.go:228] waiting for startup goroutines ...
	I0223 17:05:17.390068   30321 start.go:242] writing updated cluster config ...
	I0223 17:05:17.390377   30321 ssh_runner.go:195] Run: rm -f paused
	I0223 17:05:17.429574   30321 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 17:05:17.474432   30321 out.go:177] * Done! kubectl is now configured to use "multinode-384000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 01:04:08 UTC, end at Fri 2023-02-24 01:05:29 UTC. --
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522327381Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522353322Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522364554Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522382213Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522398747Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522429785Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522444776Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522462933Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522516501Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522901407Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.522944238Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.523375575Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.531192511Z" level=info msg="Loading containers: start."
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.609257090Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.642489297Z" level=info msg="Loading containers: done."
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.650852281Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.650917985Z" level=info msg="Daemon has completed initialization"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.671691797Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 01:04:12 multinode-384000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.675903166Z" level=info msg="API listen on [::]:2376"
	Feb 24 01:04:12 multinode-384000 dockerd[831]: time="2023-02-24T01:04:12.682172168Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.002839887Z" level=info msg="ignoring event" container=e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.108713449Z" level=info msg="ignoring event" container=e3eb3324627b8d0da5874eff0e3736635555f1bfb77aa0e7e7ab4e3fbcfd5c95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.507268499Z" level=info msg="ignoring event" container=589074bbb37e751b1a2f17d08fcdfbbb9bf359c05d004737b94812bae43849d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:04:54 multinode-384000 dockerd[831]: time="2023-02-24T01:04:54.571292264Z" level=info msg="ignoring event" container=2a7643eec8796295af038e8573f1fe0f86f8f67946fcfd0db1d2a56f86e4dda3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	c2ba1f041ba50       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 seconds ago        Running             busybox                   0                   b17d7606007b6
	e76713fd6c03d       5185b96f0becf                                                                                         35 seconds ago       Running             coredns                   1                   f60fc43795c36
	5ce4fb65e4b85       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              46 seconds ago       Running             kindnet-cni               0                   06e2e420c3204
	883f8aa15acf4       6e38f40d628db                                                                                         48 seconds ago       Running             storage-provisioner       0                   53fa26659a4a9
	589074bbb37e7       5185b96f0becf                                                                                         48 seconds ago       Exited              coredns                   0                   2a7643eec8796
	5436fde5aabd2       46a6bb3c77ce0                                                                                         49 seconds ago       Running             kube-proxy                0                   a450caec62335
	459a8621d90b3       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   59322436f6077
	2e5771ae72b9c       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   bf54c4d8eb0c2
	cc1df7eeb82a3       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   3d852f0cc313d
	624942233c6b0       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   9d79d87a7a44a
	
	* 
	* ==> coredns [589074bbb37e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:39210 - 24476 "HINFO IN 3769487758298892341.1515843376402541889. udp 57 false 512" - - 0 5.000074219s
	[ERROR] plugin/errors: 2 3769487758298892341.1515843376402541889. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[INFO] 127.0.0.1:53486 - 52777 "HINFO IN 3769487758298892341.1515843376402541889. udp 57 false 512" - - 0 5.000080673s
	[ERROR] plugin/errors: 2 3769487758298892341.1515843376402541889. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [e76713fd6c03] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:56540 - 12605 "HINFO IN 7526566672075535390.5503095815732551325. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016087575s
	[INFO] 10.244.0.3:44285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176842s
	[INFO] 10.244.0.3:41450 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046848152s
	[INFO] 10.244.0.3:40310 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003908724s
	[INFO] 10.244.0.3:45364 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01223335s
	[INFO] 10.244.0.3:39652 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102391s
	[INFO] 10.244.0.3:45398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00484566s
	[INFO] 10.244.0.3:51203 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098429s
	[INFO] 10.244.0.3:40689 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141606s
	[INFO] 10.244.0.3:51539 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004171789s
	[INFO] 10.244.0.3:46748 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200533s
	[INFO] 10.244.0.3:32832 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129906s
	[INFO] 10.244.0.3:40264 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134742s
	[INFO] 10.244.0.3:43870 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121786s
	[INFO] 10.244.0.3:36731 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076301s
	[INFO] 10.244.0.3:60911 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067362s
	[INFO] 10.244.0.3:45359 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105212s
	[INFO] 10.244.0.3:47275 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119536s
	[INFO] 10.244.0.3:44521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139412s
	[INFO] 10.244.0.3:53853 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090156s
	[INFO] 10.244.0.3:59416 - 5 "PTR IN 2.65.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000185094s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-384000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-384000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=multinode-384000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T17_04_27_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-384000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:05:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:05:28 +0000   Fri, 24 Feb 2023 01:04:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:05:28 +0000   Fri, 24 Feb 2023 01:04:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:05:28 +0000   Fri, 24 Feb 2023 01:04:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:05:28 +0000   Fri, 24 Feb 2023 01:04:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-384000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    57e18f70-d77e-4b45-ae15-597714d7865f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-vb76c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-787d4945fb-nlz4z                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     50s
	  kube-system                 etcd-multinode-384000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-n4mpj                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-multinode-384000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-multinode-384000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-wmsxr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-multinode-384000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  72s (x4 over 72s)  kubelet          Node multinode-384000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s (x4 over 72s)  kubelet          Node multinode-384000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s (x3 over 72s)  kubelet          Node multinode-384000 status is now: NodeHasSufficientPID
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  63s                kubelet          Node multinode-384000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s                kubelet          Node multinode-384000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s                kubelet          Node multinode-384000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node multinode-384000 event: Registered Node multinode-384000 in Controller
	
	
	Name:               multinode-384000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-384000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:05:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-384000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:05:11 +0000   Fri, 24 Feb 2023 01:05:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-384000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    57e18f70-d77e-4b45-ae15-597714d7865f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-nlclz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kindnet-2g647               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18s
	  kube-system                 kube-proxy-q28gd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x2 over 19s)  kubelet          Node multinode-384000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x2 over 19s)  kubelet          Node multinode-384000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x2 over 19s)  kubelet          Node multinode-384000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18s                kubelet          Node multinode-384000-m02 status is now: NodeReady
	  Normal  RegisteredNode           16s                node-controller  Node multinode-384000-m02 event: Registered Node multinode-384000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000095] FS-Cache: O-key=[8] 'c95bc40400000000'
	[  +0.000057] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000081] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000007e1c140
	[  +0.000047] FS-Cache: N-key=[8] 'c95bc40400000000'
	[  +0.003060] FS-Cache: Duplicate cookie detected
	[  +0.000048] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000069] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=00000000934117af
	[  +0.000038] FS-Cache: O-key=[8] 'c95bc40400000000'
	[  +0.000044] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000058] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000099c779f3
	[  +0.000182] FS-Cache: N-key=[8] 'c95bc40400000000'
	[  +3.488321] FS-Cache: Duplicate cookie detected
	[  +0.000062] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000055] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=000000004b032eea
	[  +0.000065] FS-Cache: O-key=[8] 'c85bc40400000000'
	[  +0.000035] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000042] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000047d46db5
	[  +0.000055] FS-Cache: N-key=[8] 'c85bc40400000000'
	[  +0.398634] FS-Cache: Duplicate cookie detected
	[  +0.000091] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=000000004a75bbd2
	[  +0.000047] FS-Cache: O-key=[8] 'd35bc40400000000'
	[  +0.000054] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000051] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000062f74fb0
	[  +0.000064] FS-Cache: N-key=[8] 'd35bc40400000000'
	
	* 
	* ==> etcd [459a8621d90b] <==
	* {"level":"info","ts":"2023-02-24T01:04:21.479Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-24T01:04:21.479Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T01:04:21.480Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T01:04:22.275Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-384000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T01:04:22.275Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T01:04:22.276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-24T01:04:22.277Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T01:05:01.901Z","caller":"traceutil/trace.go:171","msg":"trace[1455928292] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"185.969568ms","start":"2023-02-24T01:05:01.715Z","end":"2023-02-24T01:05:01.901Z","steps":["trace[1455928292] 'process raft request'  (duration: 185.861017ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:05:29 up  2:04,  0 users,  load average: 1.01, 1.07, 0.97
	Linux multinode-384000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [5ce4fb65e4b8] <==
	* I0224 01:04:44.150027       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0224 01:04:44.150136       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0224 01:04:44.150278       1 main.go:116] setting mtu 1500 for CNI 
	I0224 01:04:44.150425       1 main.go:146] kindnetd IP family: "ipv4"
	I0224 01:04:44.150467       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0224 01:04:44.850809       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:04:44.850892       1 main.go:227] handling current node
	I0224 01:04:54.864771       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:04:54.864817       1 main.go:227] handling current node
	I0224 01:05:04.876589       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:05:04.876634       1 main.go:227] handling current node
	I0224 01:05:14.880019       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:05:14.880056       1 main.go:227] handling current node
	I0224 01:05:14.880064       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 01:05:14.880068       1 main.go:250] Node multinode-384000-m02 has CIDR [10.244.1.0/24] 
	I0224 01:05:14.880165       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0224 01:05:24.891835       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 01:05:24.891899       1 main.go:227] handling current node
	I0224 01:05:24.891907       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 01:05:24.891914       1 main.go:250] Node multinode-384000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [cc1df7eeb82a] <==
	* I0224 01:04:23.413038       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 01:04:23.429505       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 01:04:23.429770       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 01:04:23.429785       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 01:04:23.429879       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 01:04:23.430017       1 cache.go:39] Caches are synced for autoregister controller
	I0224 01:04:23.430082       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 01:04:23.430322       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 01:04:23.430497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 01:04:24.153495       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 01:04:24.334068       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0224 01:04:24.337358       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0224 01:04:24.337394       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 01:04:24.954618       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 01:04:24.983382       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 01:04:25.071632       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0224 01:04:25.076084       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0224 01:04:25.076718       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 01:04:25.079989       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 01:04:25.360436       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 01:04:26.364333       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 01:04:26.372607       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0224 01:04:26.378726       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 01:04:39.549644       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 01:04:39.698191       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [2e5771ae72b9] <==
	* I0224 01:04:38.904438       1 shared_informer.go:280] Caches are synced for expand
	I0224 01:04:38.927086       1 shared_informer.go:280] Caches are synced for stateful set
	I0224 01:04:38.944817       1 shared_informer.go:280] Caches are synced for disruption
	I0224 01:04:38.945939       1 shared_informer.go:280] Caches are synced for attach detach
	I0224 01:04:38.951044       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 01:04:39.316625       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:04:39.371785       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:04:39.371824       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0224 01:04:39.553572       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0224 01:04:39.574914       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0224 01:04:39.759545       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wmsxr"
	I0224 01:04:39.759560       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n4mpj"
	I0224 01:04:39.859885       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-bvdps"
	I0224 01:04:39.865008       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-nlz4z"
	I0224 01:04:39.880539       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-bvdps"
	W0224 01:05:11.126611       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-384000-m02" does not exist
	I0224 01:05:11.130038       1 range_allocator.go:372] Set node multinode-384000-m02 PodCIDR to [10.244.1.0/24]
	I0224 01:05:11.133246       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2g647"
	I0224 01:05:11.133577       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q28gd"
	W0224 01:05:11.739697       1 topologycache.go:232] Can't get CPU or zone information for multinode-384000-m02 node
	W0224 01:05:13.799497       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-384000-m02. Assuming now as a timestamp.
	I0224 01:05:13.799689       1 event.go:294] "Event occurred" object="multinode-384000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-384000-m02 event: Registered Node multinode-384000-m02 in Controller"
	I0224 01:05:18.590187       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0224 01:05:18.597761       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-nlclz"
	I0224 01:05:18.625598       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-vb76c"
	
	* 
	* ==> kube-proxy [5436fde5aabd] <==
	* I0224 01:04:40.782386       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0224 01:04:40.782470       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0224 01:04:40.782523       1 server_others.go:535] "Using iptables proxy"
	I0224 01:04:40.861005       1 server_others.go:176] "Using iptables Proxier"
	I0224 01:04:40.861028       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0224 01:04:40.861033       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0224 01:04:40.861048       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0224 01:04:40.861070       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 01:04:40.861404       1 server.go:655] "Version info" version="v1.26.1"
	I0224 01:04:40.861415       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:04:40.862241       1 config.go:226] "Starting endpoint slice config controller"
	I0224 01:04:40.862257       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 01:04:40.862272       1 config.go:317] "Starting service config controller"
	I0224 01:04:40.862276       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 01:04:40.862327       1 config.go:444] "Starting node config controller"
	I0224 01:04:40.862336       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 01:04:40.963446       1 shared_informer.go:280] Caches are synced for service config
	I0224 01:04:40.963446       1 shared_informer.go:280] Caches are synced for node config
	I0224 01:04:40.963460       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [624942233c6b] <==
	* W0224 01:04:23.370709       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 01:04:23.370837       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0224 01:04:24.271302       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.271362       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.346347       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 01:04:24.346368       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 01:04:24.469128       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0224 01:04:24.469174       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 01:04:24.497758       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0224 01:04:24.497816       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0224 01:04:24.511111       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 01:04:24.511150       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 01:04:24.542675       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0224 01:04:24.542773       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0224 01:04:24.543275       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0224 01:04:24.543371       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0224 01:04:24.651473       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.651639       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.728069       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.728117       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.773574       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 01:04:24.773622       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0224 01:04:24.781643       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 01:04:24.781722       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0224 01:04:26.565140       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 01:04:08 UTC, end at Fri 2023-02-24 01:05:30 UTC. --
	Feb 24 01:04:42 multinode-384000 kubelet[2244]: I0224 01:04:42.178563    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-bvdps" podStartSLOduration=3.178535392 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:41.778935882 +0000 UTC m=+15.429274901" watchObservedRunningTime="2023-02-24 01:04:42.178535392 +0000 UTC m=+15.828874406"
	Feb 24 01:04:42 multinode-384000 kubelet[2244]: I0224 01:04:42.576503    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wmsxr" podStartSLOduration=3.57647565 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:42.178760745 +0000 UTC m=+15.829099755" watchObservedRunningTime="2023-02-24 01:04:42.57647565 +0000 UTC m=+16.226814665"
	Feb 24 01:04:42 multinode-384000 kubelet[2244]: I0224 01:04:42.576726    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.576711978 pod.CreationTimestamp="2023-02-24 01:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:42.576541052 +0000 UTC m=+16.226880071" watchObservedRunningTime="2023-02-24 01:04:42.576711978 +0000 UTC m=+16.227050995"
	Feb 24 01:04:44 multinode-384000 kubelet[2244]: I0224 01:04:44.270040    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-nlz4z" podStartSLOduration=5.27001081 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:42.980196679 +0000 UTC m=+16.630535702" watchObservedRunningTime="2023-02-24 01:04:44.27001081 +0000 UTC m=+17.920349830"
	Feb 24 01:04:47 multinode-384000 kubelet[2244]: I0224 01:04:47.301450    2244 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 01:04:47 multinode-384000 kubelet[2244]: I0224 01:04:47.350191    2244 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.299113    2244 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spt2b\" (UniqueName: \"kubernetes.io/projected/5e0ff7c6-6e83-42f4-bcd9-47d435925027-kube-api-access-spt2b\") pod \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\" (UID: \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\") "
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.299184    2244 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0ff7c6-6e83-42f4-bcd9-47d435925027-config-volume\") pod \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\" (UID: \"5e0ff7c6-6e83-42f4-bcd9-47d435925027\") "
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: W0224 01:04:54.299318    2244 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5e0ff7c6-6e83-42f4-bcd9-47d435925027/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.299431    2244 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0ff7c6-6e83-42f4-bcd9-47d435925027-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e0ff7c6-6e83-42f4-bcd9-47d435925027" (UID: "5e0ff7c6-6e83-42f4-bcd9-47d435925027"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.300811    2244 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0ff7c6-6e83-42f4-bcd9-47d435925027-kube-api-access-spt2b" (OuterVolumeSpecName: "kube-api-access-spt2b") pod "5e0ff7c6-6e83-42f4-bcd9-47d435925027" (UID: "5e0ff7c6-6e83-42f4-bcd9-47d435925027"). InnerVolumeSpecName "kube-api-access-spt2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.361015    2244 scope.go:115] "RemoveContainer" containerID="e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.369479    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-n4mpj" podStartSLOduration=-9.223372021485348e+09 pod.CreationTimestamp="2023-02-24 01:04:39 +0000 UTC" firstStartedPulling="2023-02-24 01:04:41.157383491 +0000 UTC m=+14.807722502" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:04:44.27028833 +0000 UTC m=+17.920627340" watchObservedRunningTime="2023-02-24 01:04:54.369427954 +0000 UTC m=+28.020139251"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.372144    2244 scope.go:115] "RemoveContainer" containerID="e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: E0224 01:04:54.372923    2244 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad" containerID="e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.372957    2244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad} err="failed to get container status \"e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad\": rpc error: code = Unknown desc = Error: No such container: e7f23861d2e3eccf62870d7622e717155e55aebf6bb208e0076d56ffc8a6fcad"
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.400182    2244 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-spt2b\" (UniqueName: \"kubernetes.io/projected/5e0ff7c6-6e83-42f4-bcd9-47d435925027-kube-api-access-spt2b\") on node \"multinode-384000\" DevicePath \"\""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.400230    2244 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0ff7c6-6e83-42f4-bcd9-47d435925027-config-volume\") on node \"multinode-384000\" DevicePath \"\""
	Feb 24 01:04:54 multinode-384000 kubelet[2244]: I0224 01:04:54.573477    2244 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5e0ff7c6-6e83-42f4-bcd9-47d435925027 path="/var/lib/kubelet/pods/5e0ff7c6-6e83-42f4-bcd9-47d435925027/volumes"
	Feb 24 01:04:55 multinode-384000 kubelet[2244]: I0224 01:04:55.377637    2244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7643eec8796295af038e8573f1fe0f86f8f67946fcfd0db1d2a56f86e4dda3"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: I0224 01:05:18.629059    2244 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: E0224 01:05:18.629106    2244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e0ff7c6-6e83-42f4-bcd9-47d435925027" containerName="coredns"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: I0224 01:05:18.629135    2244 memory_manager.go:346] "RemoveStaleState removing state" podUID="5e0ff7c6-6e83-42f4-bcd9-47d435925027" containerName="coredns"
	Feb 24 01:05:18 multinode-384000 kubelet[2244]: I0224 01:05:18.763300    2244 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrn7x\" (UniqueName: \"kubernetes.io/projected/1a4a3aef-ff8d-45d3-9b2b-c661e7ee02af-kube-api-access-rrn7x\") pod \"busybox-6b86dd6d48-vb76c\" (UID: \"1a4a3aef-ff8d-45d3-9b2b-c661e7ee02af\") " pod="default/busybox-6b86dd6d48-vb76c"
	Feb 24 01:05:21 multinode-384000 kubelet[2244]: I0224 01:05:21.548230    2244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-vb76c" podStartSLOduration=-9.223372033306572e+09 pod.CreationTimestamp="2023-02-24 01:05:18 +0000 UTC" firstStartedPulling="2023-02-24 01:05:19.229822209 +0000 UTC m=+52.880533502" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:05:21.54804816 +0000 UTC m=+55.198759460" watchObservedRunningTime="2023-02-24 01:05:21.54820407 +0000 UTC m=+55.198915371"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-384000 -n multinode-384000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-384000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.45s)
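A minimal manual-reproduction sketch of the failed connectivity check (not part of the test run): the busybox pod name is taken from the kubelet log above, the kubectl context from the test, and <host-ip> is a hypothetical placeholder for the Docker host address the test pings, which is not shown in this excerpt.
	# ping the host from the busybox pod used by the test (<host-ip> is a placeholder)
	kubectl --context multinode-384000 exec busybox-6b86dd6d48-vb76c -- ping -c 1 <host-ip>
	# list pods that are not Running, mirroring the post-mortem query above
	kubectl --context multinode-384000 get po -A --field-selector=status.phase!=Running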

                                                
                                    
x
+
TestRunningBinaryUpgrade (74.58s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.427705580.exe start -p running-upgrade-826000 --memory=2200 --vm-driver=docker 
E0223 17:18:55.742821   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.427705580.exe start -p running-upgrade-826000 --memory=2200 --vm-driver=docker : exit status 70 (1m0.12935766s)

                                                
                                                
-- stdout --
	! [running-upgrade-826000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1880417242
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:19:17.354091482 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-826000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:19:36.856401168 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-826000", then "minikube start -p running-upgrade-826000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB  [flattened carriage-return download progress collapsed to final size]
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:19:36.856401168 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
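The failure above is docker.service refusing to restart inside the kic container after the provisioner rewrote its unit file. The output already points at systemctl and journalctl; a hedged sketch of running those checks from the host follows (container name taken from the log, standard docker/systemd CLI, not something the test harness executes):
	# show why docker.service failed inside the running-upgrade container
	docker exec running-upgrade-826000 systemctl status docker.service
	docker exec running-upgrade-826000 journalctl -xeu docker.service
	# inspect the rewritten unit that the diff above was generating
	docker exec running-upgrade-826000 cat /lib/systemd/system/docker.service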
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.427705580.exe start -p running-upgrade-826000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.427705580.exe start -p running-upgrade-826000 --memory=2200 --vm-driver=docker : exit status 70 (4.308221457s)

                                                
                                                
-- stdout --
	* [running-upgrade-826000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2142748617
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-826000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
E0223 17:19:44.813772   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.427705580.exe start -p running-upgrade-826000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.427705580.exe start -p running-upgrade-826000 --memory=2200 --vm-driver=docker : exit status 70 (3.229669327s)

                                                
                                                
-- stdout --
	* [running-upgrade-826000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1398092804
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-826000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-02-23 17:19:48.187558 -0800 PST m=+2363.541205930
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-826000
helpers_test.go:235: (dbg) docker inspect running-upgrade-826000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b2ef9b134f6ae624dfa228723d34211bb0dccf665f502c08ced16aa101443e2a",
	        "Created": "2023-02-24T01:19:25.527919026Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 558005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:19:25.75850124Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/b2ef9b134f6ae624dfa228723d34211bb0dccf665f502c08ced16aa101443e2a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b2ef9b134f6ae624dfa228723d34211bb0dccf665f502c08ced16aa101443e2a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b2ef9b134f6ae624dfa228723d34211bb0dccf665f502c08ced16aa101443e2a/hosts",
	        "LogPath": "/var/lib/docker/containers/b2ef9b134f6ae624dfa228723d34211bb0dccf665f502c08ced16aa101443e2a/b2ef9b134f6ae624dfa228723d34211bb0dccf665f502c08ced16aa101443e2a-json.log",
	        "Name": "/running-upgrade-826000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-826000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ada53bb938837c773c961bb91afa503c35c193e5fed5c643fef644c39654be89-init/diff:/var/lib/docker/overlay2/68b896e9e4b459964811ab116402ef03a6fece615c7f43c5a371b1fdccd87dc4/diff:/var/lib/docker/overlay2/6702894163c72587d4907ecbd3979d49838df9b235f6a7228816d07b9fee149e/diff:/var/lib/docker/overlay2/ada850987ad57f5b28c128c8163d0f2bec6b7de15e2639e553a0e02662b7a179/diff:/var/lib/docker/overlay2/641c086c35a9e1c6d7c63417e0cf190c35b85f9bd78df78a520ebc5950a561ee/diff:/var/lib/docker/overlay2/4d2461abf0ec25533691697489a139e6fb8b03ccbb7f89f95754ca3f136c45da/diff:/var/lib/docker/overlay2/466c324bfc28381e4fe6b2ceca55fd20ff500f16661d042c47071e367cafcdb8/diff:/var/lib/docker/overlay2/8be1fb96dfea68a547b7d249deee8a7438eb93160ca695efe5dc9eed2566dea3/diff:/var/lib/docker/overlay2/45249acd1b8805ba66680057a4dd9054c77c55f55ade363cf3710e1e57ac48bf/diff:/var/lib/docker/overlay2/01c9031850d5514f35753e13ece571ee1373089da11d8fba72282305e82faab1/diff:/var/lib/docker/overlay2/17da1d
5e19283c033a7ecea4fcd260aea844c9210c0d2a6cae701fdf7e3aab00/diff:/var/lib/docker/overlay2/d1addcd04da3d5d33698ba4af2c65dcbf116ab8d72caeb762d1b64e94e4e0377/diff:/var/lib/docker/overlay2/7c3c030d464d341b90c3c19d87bebb57512eb1b9381aed11c2f10349a7f0a544/diff:/var/lib/docker/overlay2/ddadc1040bac01a8e198b8577151852cb82b023454be50f2e6bdc37d3361fffc/diff:/var/lib/docker/overlay2/9e36b6ff5361ec746a15d0cf9f6a246f474393b156feb86be61c7336bd2e000f/diff:/var/lib/docker/overlay2/e7c79d81776cebe7978369bebf06bca9f7e076cc2140582d405c502bac6ec766/diff:/var/lib/docker/overlay2/ecafb5388e95add1f5e85d827eefdb04a6c9496c54e2305970319e64813ee7e4/diff:/var/lib/docker/overlay2/c95a2c14807a942da9d61633f9c21ece1a0d9a2215c2cca053e6a313fe15ee69/diff:/var/lib/docker/overlay2/462b12ee23b2bacc77ce13b3c4ebca18de99931e301ea20ddcf91f66fd51e98d/diff:/var/lib/docker/overlay2/66e33aca4c3abeb5ab250c4acbea655d919447d419bc9f676ad87de9723cf3d1/diff:/var/lib/docker/overlay2/bf3b3864fa03107e8dbc5202b32d4a19deba149f01ad111a4d653ab49f8f9548/diff:/var/lib/d
ocker/overlay2/66ed2cde4734b96e481801ca8f7e0575283cc7121014ef962d4d47acccae9087/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ada53bb938837c773c961bb91afa503c35c193e5fed5c643fef644c39654be89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ada53bb938837c773c961bb91afa503c35c193e5fed5c643fef644c39654be89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ada53bb938837c773c961bb91afa503c35c193e5fed5c643fef644c39654be89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-826000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-826000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-826000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-826000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-826000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "104ff8ff8ddab188ec87d00e5b4cd434b511f9a95a26f90fd531b70758dc1c00",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59418"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59419"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/104ff8ff8dda",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b3aa5de475ea8efbdd88707b1b3a5b258fb43252b822f75c7c294135c8fe8a46",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "c2dba025af1153f939b032c53cf1836b587b950a1d8caa75a6cb026c36a7f3c9",
	                    "EndpointID": "b3aa5de475ea8efbdd88707b1b3a5b258fb43252b822f75c7c294135c8fe8a46",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
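The docker inspect dump above can be narrowed to the handful of fields a post-mortem usually needs with a Go template; a hedged sketch using only fields visible in the output (not something the harness runs):
	# container state and bridge IP
	docker inspect -f '{{.State.Status}} {{.NetworkSettings.IPAddress}}' running-upgrade-826000
	# host port mapped to the API server port 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' running-upgrade-826000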
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-826000 -n running-upgrade-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-826000 -n running-upgrade-826000: exit status 6 (378.405225ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 17:19:48.614714   35297 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-826000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-826000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
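The warning embedded in that state string means the kubeconfig still points at a stale endpoint; outside of CI the fix is the one the output itself suggests (a sketch, not run by the harness):
	# repoint the kubeconfig entry for this profile at the current endpoint
	minikube update-context -p running-upgrade-826000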
helpers_test.go:175: Cleaning up "running-upgrade-826000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-826000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-826000: (2.310317854s)
--- FAIL: TestRunningBinaryUpgrade (74.58s)
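For a manual retry outside CI, the first failed attempt already printed the recipe; restated here as a sketch with the same profile name and flags as in the output above:
	minikube delete -p running-upgrade-826000
	minikube start -p running-upgrade-826000 --alsologtostderr -v=1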

                                                
                                    
x
+
TestKubernetesUpgrade (583.09s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m9.894007671s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-238000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-238000 in cluster kubernetes-upgrade-238000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 17:20:52.410469   35681 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:20:52.410624   35681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:20:52.410630   35681 out.go:309] Setting ErrFile to fd 2...
	I0223 17:20:52.410633   35681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:20:52.410740   35681 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:20:52.412115   35681 out.go:303] Setting JSON to false
	I0223 17:20:52.430856   35681 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8427,"bootTime":1677193225,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:20:52.430940   35681 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:20:52.452344   35681 out.go:177] * [kubernetes-upgrade-238000] minikube v1.29.0 on Darwin 13.2
	I0223 17:20:52.494712   35681 notify.go:220] Checking for updates...
	I0223 17:20:52.494719   35681 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:20:52.516474   35681 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:20:52.537525   35681 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:20:52.558476   35681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:20:52.579751   35681 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:20:52.601601   35681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:20:52.623104   35681 config.go:182] Loaded profile config "cert-expiration-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:20:52.623195   35681 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:20:52.684074   35681 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:20:52.684200   35681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:20:52.825641   35681 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:20:52.733560171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:20:52.867318   35681 out.go:177] * Using the docker driver based on user configuration
	I0223 17:20:52.904529   35681 start.go:296] selected driver: docker
	I0223 17:20:52.904552   35681 start.go:857] validating driver "docker" against <nil>
	I0223 17:20:52.904609   35681 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:20:52.908172   35681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:20:53.047852   35681 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:20:52.957461886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:20:53.047976   35681 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 17:20:53.048152   35681 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 17:20:53.069874   35681 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 17:20:53.091416   35681 cni.go:84] Creating CNI manager for ""
	I0223 17:20:53.091439   35681 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 17:20:53.091448   35681 start_flags.go:319] config:
	{Name:kubernetes-upgrade-238000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-238000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:20:53.113493   35681 out.go:177] * Starting control plane node kubernetes-upgrade-238000 in cluster kubernetes-upgrade-238000
	I0223 17:20:53.134584   35681 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:20:53.155618   35681 out.go:177] * Pulling base image ...
	I0223 17:20:53.197438   35681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:20:53.197483   35681 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:20:53.197535   35681 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 17:20:53.197555   35681 cache.go:57] Caching tarball of preloaded images
	I0223 17:20:53.197766   35681 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:20:53.197787   35681 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 17:20:53.198760   35681 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/config.json ...
	I0223 17:20:53.198929   35681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/config.json: {Name:mk9987747ae10b44e08daa77fc863497dd17dc00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:20:53.253596   35681 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:20:53.253613   35681 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:20:53.253651   35681 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:20:53.253706   35681 start.go:364] acquiring machines lock for kubernetes-upgrade-238000: {Name:mk2441e5d722fc72d266c863f46cd5fa5ce6ba49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:20:53.253867   35681 start.go:368] acquired machines lock for "kubernetes-upgrade-238000" in 149.152µs
	I0223 17:20:53.253900   35681 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-238000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-238000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 17:20:53.253972   35681 start.go:125] createHost starting for "" (driver="docker")
	I0223 17:20:53.297610   35681 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 17:20:53.297987   35681 start.go:159] libmachine.API.Create for "kubernetes-upgrade-238000" (driver="docker")
	I0223 17:20:53.298035   35681 client.go:168] LocalClient.Create starting
	I0223 17:20:53.298314   35681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 17:20:53.298410   35681 main.go:141] libmachine: Decoding PEM data...
	I0223 17:20:53.298441   35681 main.go:141] libmachine: Parsing certificate...
	I0223 17:20:53.298561   35681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 17:20:53.298629   35681 main.go:141] libmachine: Decoding PEM data...
	I0223 17:20:53.298646   35681 main.go:141] libmachine: Parsing certificate...
	I0223 17:20:53.299348   35681 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-238000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 17:20:53.353993   35681 cli_runner.go:211] docker network inspect kubernetes-upgrade-238000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 17:20:53.354097   35681 network_create.go:281] running [docker network inspect kubernetes-upgrade-238000] to gather additional debugging logs...
	I0223 17:20:53.354113   35681 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-238000
	W0223 17:20:53.407746   35681 cli_runner.go:211] docker network inspect kubernetes-upgrade-238000 returned with exit code 1
	I0223 17:20:53.407770   35681 network_create.go:284] error running [docker network inspect kubernetes-upgrade-238000]: docker network inspect kubernetes-upgrade-238000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-238000
	I0223 17:20:53.407802   35681 network_create.go:286] output of [docker network inspect kubernetes-upgrade-238000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-238000
	
	** /stderr **
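
The failed inspects above are expected on first creation: minikube checks whether the network already exists and, when the templated inspect exits non-zero, re-runs a plain `docker network inspect` solely to capture stdout/stderr for the log. A minimal Go sketch of that capture pattern, assuming only the network name from the log (illustrative, not minikube's actual cli_runner code):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // inspectNetwork runs `docker network inspect <name>` and returns stdout,
    // stderr and the exit error, mirroring the debug re-run in the log above.
    func inspectNetwork(name string) (string, string, error) {
        cmd := exec.Command("docker", "network", "inspect", name)
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        err := cmd.Run() // exit status 1 when the network does not exist yet
        return stdout.String(), stderr.String(), err
    }

    func main() {
        out, errOut, err := inspectNetwork("kubernetes-upgrade-238000")
        if err != nil {
            // e.g. stderr: "Error: No such network: kubernetes-upgrade-238000"
            fmt.Printf("inspect failed: %v\nstdout: %s\nstderr: %s\n", err, out, errOut)
        }
    }
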
	I0223 17:20:53.407896   35681 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 17:20:53.463581   35681 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:20:53.463932   35681 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ec24e0}
	I0223 17:20:53.463945   35681 network_create.go:123] attempt to create docker network kubernetes-upgrade-238000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 17:20:53.464019   35681 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 kubernetes-upgrade-238000
	W0223 17:20:53.518325   35681 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 kubernetes-upgrade-238000 returned with exit code 1
	W0223 17:20:53.518361   35681 network_create.go:148] failed to create docker network kubernetes-upgrade-238000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 kubernetes-upgrade-238000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 17:20:53.518376   35681 network_create.go:115] failed to create docker network kubernetes-upgrade-238000 192.168.58.0/24, will retry: subnet is taken
	I0223 17:20:53.519946   35681 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:20:53.520271   35681 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ec3600}
	I0223 17:20:53.520282   35681 network_create.go:123] attempt to create docker network kubernetes-upgrade-238000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 17:20:53.520363   35681 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 kubernetes-upgrade-238000
	W0223 17:20:53.575488   35681 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 kubernetes-upgrade-238000 returned with exit code 1
	W0223 17:20:53.575524   35681 network_create.go:148] failed to create docker network kubernetes-upgrade-238000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 kubernetes-upgrade-238000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 17:20:53.575536   35681 network_create.go:115] failed to create docker network kubernetes-upgrade-238000 192.168.67.0/24, will retry: subnet is taken
	I0223 17:20:53.577095   35681 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:20:53.577517   35681 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000579750}
	I0223 17:20:53.577535   35681 network_create.go:123] attempt to create docker network kubernetes-upgrade-238000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 17:20:53.577636   35681 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 kubernetes-upgrade-238000
	I0223 17:20:53.664651   35681 network_create.go:107] docker network kubernetes-upgrade-238000 192.168.76.0/24 created
	I0223 17:20:53.664682   35681 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-238000" container
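
The two "Pool overlaps" failures followed by a success illustrate minikube's subnet-probing loop: it walks candidate private /24 ranges until `docker network create` stops colliding with an existing pool. A rough Go sketch of that retry pattern, assuming the stride suggested by the 58 -> 67 -> 76 progression above (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createNetwork tries successive 192.168.x.0/24 subnets until the Docker
    // daemon accepts one, skipping ranges that overlap an existing pool.
    func createNetwork(name string) (string, error) {
        for third := 58; third <= 238; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if strings.Contains(string(out), "Pool overlaps with other one on this address space") {
                continue // subnet already claimed by another network; try the next range
            }
            return "", fmt.Errorf("docker network create failed: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
        subnet, err := createNetwork("kubernetes-upgrade-238000")
        fmt.Println(subnet, err)
    }
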
	I0223 17:20:53.664823   35681 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 17:20:53.720132   35681 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-238000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 --label created_by.minikube.sigs.k8s.io=true
	I0223 17:20:53.773759   35681 oci.go:103] Successfully created a docker volume kubernetes-upgrade-238000
	I0223 17:20:53.773898   35681 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-238000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 --entrypoint /usr/bin/test -v kubernetes-upgrade-238000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 17:20:54.212925   35681 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-238000
	I0223 17:20:54.212963   35681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:20:54.212979   35681 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 17:20:54.213102   35681 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-238000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 17:20:59.881627   35681 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-238000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.668425253s)
	I0223 17:20:59.881647   35681 kic.go:199] duration metric: took 5.668643 seconds to extract preloaded images to volume
	I0223 17:20:59.881767   35681 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 17:21:00.023739   35681 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-238000 --name kubernetes-upgrade-238000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-238000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-238000 --network kubernetes-upgrade-238000 --ip 192.168.76.2 --volume kubernetes-upgrade-238000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 17:21:00.373076   35681 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Running}}
	I0223 17:21:00.432836   35681 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Status}}
	I0223 17:21:00.495680   35681 cli_runner.go:164] Run: docker exec kubernetes-upgrade-238000 stat /var/lib/dpkg/alternatives/iptables
	I0223 17:21:00.605774   35681 oci.go:144] the created container "kubernetes-upgrade-238000" has a running status.
	I0223 17:21:00.605821   35681 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa...
	I0223 17:21:00.662225   35681 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 17:21:00.769455   35681 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Status}}
	I0223 17:21:00.829982   35681 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 17:21:00.830002   35681 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-238000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 17:21:00.940111   35681 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Status}}
	I0223 17:21:00.996670   35681 machine.go:88] provisioning docker machine ...
	I0223 17:21:00.996718   35681 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-238000"
	I0223 17:21:00.996828   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:01.053000   35681 main.go:141] libmachine: Using SSH client type: native
	I0223 17:21:01.053394   35681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59528 <nil> <nil>}
	I0223 17:21:01.053407   35681 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-238000 && echo "kubernetes-upgrade-238000" | sudo tee /etc/hostname
	I0223 17:21:01.196247   35681 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-238000
	
	I0223 17:21:01.196348   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:01.255103   35681 main.go:141] libmachine: Using SSH client type: native
	I0223 17:21:01.255464   35681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59528 <nil> <nil>}
	I0223 17:21:01.255477   35681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-238000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-238000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-238000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:21:01.388311   35681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
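
Each of these provisioning snippets is pushed over SSH to the container's published port (127.0.0.1:59528 here) using the generated id_rsa key. A bare-bones sketch of that transport with golang.org/x/crypto/ssh, assuming the key path and port from the log; it is not libmachine's actual implementation, and it skips host-key verification the way a throwaway test node typically would:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and port are taken from the log lines above.
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:59528", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test node
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Run one of the provisioning commands seen in the log.
        out, err := session.CombinedOutput("hostname")
        fmt.Printf("%s(err=%v)\n", out, err)
    }
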
	I0223 17:21:01.388336   35681 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:21:01.388359   35681 ubuntu.go:177] setting up certificates
	I0223 17:21:01.388364   35681 provision.go:83] configureAuth start
	I0223 17:21:01.388452   35681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-238000
	I0223 17:21:01.444915   35681 provision.go:138] copyHostCerts
	I0223 17:21:01.445018   35681 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:21:01.445027   35681 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:21:01.445139   35681 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:21:01.445328   35681 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:21:01.445334   35681 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:21:01.445396   35681 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:21:01.445538   35681 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:21:01.445544   35681 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:21:01.445603   35681 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:21:01.445711   35681 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-238000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-238000]
	I0223 17:21:01.576601   35681 provision.go:172] copyRemoteCerts
	I0223 17:21:01.576666   35681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:21:01.576721   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:01.635336   35681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:21:01.728384   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:21:01.746368   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0223 17:21:01.763655   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:21:01.781203   35681 provision.go:86] duration metric: configureAuth took 392.824707ms
	I0223 17:21:01.781226   35681 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:21:01.781374   35681 config.go:182] Loaded profile config "kubernetes-upgrade-238000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 17:21:01.781437   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:01.839627   35681 main.go:141] libmachine: Using SSH client type: native
	I0223 17:21:01.840006   35681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59528 <nil> <nil>}
	I0223 17:21:01.840019   35681 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:21:01.974186   35681 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:21:01.974199   35681 ubuntu.go:71] root file system type: overlay
	I0223 17:21:01.974315   35681 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:21:01.974402   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:02.031363   35681 main.go:141] libmachine: Using SSH client type: native
	I0223 17:21:02.031732   35681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59528 <nil> <nil>}
	I0223 17:21:02.031782   35681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:21:02.174418   35681 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:21:02.174510   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:02.231326   35681 main.go:141] libmachine: Using SSH client type: native
	I0223 17:21:02.231673   35681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59528 <nil> <nil>}
	I0223 17:21:02.231686   35681 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:21:02.905084   35681 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:21:02.172907474 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 17:21:02.905105   35681 machine.go:91] provisioned docker machine in 1.90840902s
	I0223 17:21:02.905111   35681 client.go:171] LocalClient.Create took 9.607028186s
	I0223 17:21:02.905127   35681 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-238000" took 9.607100745s
	I0223 17:21:02.905136   35681 start.go:300] post-start starting for "kubernetes-upgrade-238000" (driver="docker")
	I0223 17:21:02.905142   35681 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:21:02.905234   35681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:21:02.905291   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:02.963931   35681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:21:03.059418   35681 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:21:03.063139   35681 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:21:03.063159   35681 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:21:03.063166   35681 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:21:03.063171   35681 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:21:03.063181   35681 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:21:03.063279   35681 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:21:03.063462   35681 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:21:03.063661   35681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:21:03.071188   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:21:03.088572   35681 start.go:303] post-start completed in 183.426482ms
	I0223 17:21:03.089070   35681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-238000
	I0223 17:21:03.147405   35681 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/config.json ...
	I0223 17:21:03.148276   35681 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:21:03.148340   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:03.209164   35681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:21:03.302403   35681 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:21:03.307212   35681 start.go:128] duration metric: createHost completed in 10.053187332s
	I0223 17:21:03.307229   35681 start.go:83] releasing machines lock for "kubernetes-upgrade-238000", held for 10.053309673s
	I0223 17:21:03.307311   35681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-238000
	I0223 17:21:03.513814   35681 ssh_runner.go:195] Run: cat /version.json
	I0223 17:21:03.513834   35681 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 17:21:03.513934   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:03.513960   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:03.576851   35681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:21:03.577026   35681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:21:03.925616   35681 ssh_runner.go:195] Run: systemctl --version
	I0223 17:21:03.930840   35681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:21:03.936123   35681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:21:03.958045   35681 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:21:03.958118   35681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 17:21:03.974635   35681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 17:21:03.990091   35681 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 17:21:03.990105   35681 start.go:485] detecting cgroup driver to use...
	I0223 17:21:03.990116   35681 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:21:03.990185   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:21:04.003992   35681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 17:21:04.013306   35681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:21:04.022173   35681 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:21:04.022235   35681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:21:04.032865   35681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:21:04.041284   35681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:21:04.053798   35681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:21:04.062608   35681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:21:04.070510   35681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:21:04.079636   35681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:21:04.087672   35681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:21:04.095805   35681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:21:04.170444   35681 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:21:04.248446   35681 start.go:485] detecting cgroup driver to use...
	I0223 17:21:04.248472   35681 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:21:04.248535   35681 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:21:04.261364   35681 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:21:04.261437   35681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:21:04.273567   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:21:04.292539   35681 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:21:04.411247   35681 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:21:04.497352   35681 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:21:04.497373   35681 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
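
The 144-byte /etc/docker/daemon.json pushed here is not printed in the log; forcing the cgroupfs driver is normally done through Docker's exec-opts setting. A hedged sketch of writing such a file from Go (the exact fields minikube includes may differ):

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        // Plausible minimal payload; the real daemon.json may carry additional
        // settings (log options, storage driver, etc.).
        cfg := map[string]interface{}{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("daemon.json", append(data, '\n'), 0644); err != nil {
            panic(err)
        }
    }
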
	I0223 17:21:04.513645   35681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:21:04.580184   35681 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:21:04.889425   35681 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:21:04.919007   35681 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:21:04.992191   35681 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0223 17:21:04.992286   35681 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-238000 dig +short host.docker.internal
	I0223 17:21:05.107644   35681 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:21:05.107761   35681 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:21:05.112352   35681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:21:05.123064   35681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:21:05.184221   35681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:21:05.184310   35681 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:21:05.207329   35681 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:21:05.207352   35681 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:21:05.207457   35681 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:21:05.229450   35681 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:21:05.229464   35681 cache_images.go:84] Images are preloaded, skipping loading
	I0223 17:21:05.229568   35681 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:21:05.258271   35681 cni.go:84] Creating CNI manager for ""
	I0223 17:21:05.258289   35681 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 17:21:05.258307   35681 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:21:05.258327   35681 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-238000 NodeName:kubernetes-upgrade-238000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:21:05.258434   35681 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-238000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-238000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 17:21:05.258509   35681 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-238000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-238000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 17:21:05.258574   35681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 17:21:05.266437   35681 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:21:05.266516   35681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:21:05.274213   35681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0223 17:21:05.287338   35681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:21:05.300789   35681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0223 17:21:05.314042   35681 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:21:05.318149   35681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
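
The one-liner above is the same idempotent hosts-entry trick used earlier for host.minikube.internal: filter out any stale line, append the fresh mapping, and swap the file into place. The same pattern in Go, with paths and names made explicit for illustration (it targets a scratch file rather than the real /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line ending in "<tab><host>", appends
    // the desired mapping, and replaces the file via a temporary copy.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := ensureHostsEntry("hosts.test", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
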
	I0223 17:21:05.328331   35681 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000 for IP: 192.168.76.2
	I0223 17:21:05.328348   35681 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:21:05.328529   35681 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:21:05.328589   35681 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:21:05.328642   35681 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.key
	I0223 17:21:05.328657   35681 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.crt with IP's: []
	I0223 17:21:05.552676   35681 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.crt ...
	I0223 17:21:05.552690   35681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.crt: {Name:mkd62f93985dbe0337540f912e3233d9bf4b2f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:21:05.553003   35681 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.key ...
	I0223 17:21:05.553011   35681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.key: {Name:mk53a6629d9005661a384d3fdbd474a683b58dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:21:05.553203   35681 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key.31bdca25
	I0223 17:21:05.553221   35681 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 17:21:05.721882   35681 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.crt.31bdca25 ...
	I0223 17:21:05.721898   35681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.crt.31bdca25: {Name:mk982a82766e6004776254873a49fec7264c2323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:21:05.722208   35681 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key.31bdca25 ...
	I0223 17:21:05.722216   35681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key.31bdca25: {Name:mk862e99b58406d2f9649ec1dc76f69c5fdecde4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:21:05.722411   35681 certs.go:333] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.crt
	I0223 17:21:05.722584   35681 certs.go:337] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key
	I0223 17:21:05.722749   35681 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.key
	I0223 17:21:05.722767   35681 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.crt with IP's: []
	I0223 17:21:05.788971   35681 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.crt ...
	I0223 17:21:05.788987   35681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.crt: {Name:mk143720ef3462df2321aa0db69dda161e193932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:21:05.789276   35681 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.key ...
	I0223 17:21:05.789284   35681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.key: {Name:mk3c10640d3ddd6dc5a9d9f2d882d09614eac30d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:21:05.789697   35681 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:21:05.789747   35681 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:21:05.789759   35681 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:21:05.789793   35681 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:21:05.789825   35681 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:21:05.789858   35681 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:21:05.789927   35681 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:21:05.790434   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:21:05.809571   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 17:21:05.828826   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:21:05.848959   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 17:21:05.868476   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:21:05.887369   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:21:05.904996   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:21:05.925257   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:21:05.943284   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:21:05.961611   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:21:05.979030   35681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:21:05.996616   35681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:21:06.011052   35681 ssh_runner.go:195] Run: openssl version
	I0223 17:21:06.017619   35681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:21:06.027649   35681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:21:06.032544   35681 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:21:06.032602   35681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:21:06.038838   35681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:21:06.047940   35681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:21:06.057286   35681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:21:06.062532   35681 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:21:06.062613   35681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:21:06.069081   35681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:21:06.078069   35681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:21:06.086791   35681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:21:06.091001   35681 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:21:06.091049   35681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:21:06.096655   35681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 17:21:06.104987   35681 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-238000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-238000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:21:06.105098   35681 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:21:06.125476   35681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:21:06.133352   35681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:21:06.141083   35681 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:21:06.141138   35681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:21:06.148653   35681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:21:06.148675   35681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:21:06.197671   35681 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 17:21:06.197709   35681 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:21:06.377388   35681 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:21:06.377475   35681 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:21:06.377561   35681 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:21:06.555107   35681 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:21:06.555890   35681 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:21:06.562206   35681 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 17:21:06.631920   35681 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:21:06.676392   35681 out.go:204]   - Generating certificates and keys ...
	I0223 17:21:06.676531   35681 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:21:06.676638   35681 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:21:06.683385   35681 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 17:21:06.902266   35681 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 17:21:07.077107   35681 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 17:21:07.142917   35681 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 17:21:07.226545   35681 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 17:21:07.226740   35681 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-238000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0223 17:21:07.416378   35681 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 17:21:07.439070   35681 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-238000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0223 17:21:07.543320   35681 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 17:21:07.677166   35681 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 17:21:07.803125   35681 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 17:21:07.803218   35681 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:21:08.025974   35681 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:21:08.105109   35681 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:21:08.162586   35681 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:21:08.252727   35681 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:21:08.253428   35681 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:21:08.273476   35681 out.go:204]   - Booting up control plane ...
	I0223 17:21:08.273615   35681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:21:08.273751   35681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:21:08.273856   35681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:21:08.274008   35681 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:21:08.274217   35681 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:21:48.263897   35681 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 17:21:48.264831   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:21:48.265107   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:21:53.265447   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:21:53.265599   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:22:03.266405   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:22:03.266668   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:22:23.267042   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:22:23.267285   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:23:03.267756   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:23:03.267924   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:23:03.267931   35681 kubeadm.go:322] 
	I0223 17:23:03.267984   35681 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 17:23:03.268019   35681 kubeadm.go:322] 	timed out waiting for the condition
	I0223 17:23:03.268028   35681 kubeadm.go:322] 
	I0223 17:23:03.268075   35681 kubeadm.go:322] This error is likely caused by:
	I0223 17:23:03.268126   35681 kubeadm.go:322] 	- The kubelet is not running
	I0223 17:23:03.268225   35681 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 17:23:03.268234   35681 kubeadm.go:322] 
	I0223 17:23:03.268314   35681 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 17:23:03.268342   35681 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 17:23:03.268369   35681 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 17:23:03.268376   35681 kubeadm.go:322] 
	I0223 17:23:03.268455   35681 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 17:23:03.268543   35681 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 17:23:03.268604   35681 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 17:23:03.268643   35681 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 17:23:03.268707   35681 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 17:23:03.268733   35681 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 17:23:03.271286   35681 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 17:23:03.271364   35681 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 17:23:03.271499   35681 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 17:23:03.271595   35681 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:23:03.271665   35681 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 17:23:03.271726   35681 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 17:23:03.271884   35681 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-238000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-238000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-238000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-238000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 17:23:03.271924   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 17:23:03.683214   35681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:23:03.693743   35681 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:23:03.693809   35681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:23:03.701962   35681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:23:03.701989   35681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:23:03.753449   35681 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 17:23:03.753532   35681 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:23:03.932628   35681 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:23:03.932728   35681 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:23:03.932809   35681 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:23:04.098927   35681 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:23:04.099696   35681 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:23:04.107034   35681 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 17:23:04.176025   35681 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:23:04.197624   35681 out.go:204]   - Generating certificates and keys ...
	I0223 17:23:04.197730   35681 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:23:04.197828   35681 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:23:04.197904   35681 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 17:23:04.197978   35681 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 17:23:04.198065   35681 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 17:23:04.198130   35681 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 17:23:04.198205   35681 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 17:23:04.198256   35681 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 17:23:04.198364   35681 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 17:23:04.198432   35681 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 17:23:04.198465   35681 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 17:23:04.198508   35681 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:23:04.240474   35681 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:23:04.453984   35681 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:23:04.593069   35681 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:23:04.679508   35681 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:23:04.680525   35681 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:23:04.702124   35681 out.go:204]   - Booting up control plane ...
	I0223 17:23:04.702203   35681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:23:04.702284   35681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:23:04.702349   35681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:23:04.702420   35681 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:23:04.702617   35681 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:23:44.690114   35681 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 17:23:44.691028   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:23:44.691229   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:23:49.692674   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:23:49.692905   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:23:59.693492   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:23:59.693636   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:24:19.694857   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:24:19.695040   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:24:59.696955   35681 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:24:59.697216   35681 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:24:59.697229   35681 kubeadm.go:322] 
	I0223 17:24:59.697285   35681 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 17:24:59.697397   35681 kubeadm.go:322] 	timed out waiting for the condition
	I0223 17:24:59.697417   35681 kubeadm.go:322] 
	I0223 17:24:59.697467   35681 kubeadm.go:322] This error is likely caused by:
	I0223 17:24:59.697548   35681 kubeadm.go:322] 	- The kubelet is not running
	I0223 17:24:59.697677   35681 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 17:24:59.697688   35681 kubeadm.go:322] 
	I0223 17:24:59.697798   35681 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 17:24:59.697836   35681 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 17:24:59.697868   35681 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 17:24:59.697873   35681 kubeadm.go:322] 
	I0223 17:24:59.698000   35681 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 17:24:59.698102   35681 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 17:24:59.698202   35681 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 17:24:59.698261   35681 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 17:24:59.698351   35681 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 17:24:59.698386   35681 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 17:24:59.701244   35681 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 17:24:59.701318   35681 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 17:24:59.701418   35681 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 17:24:59.701511   35681 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:24:59.701567   35681 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 17:24:59.701616   35681 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 17:24:59.701653   35681 kubeadm.go:403] StartCluster complete in 3m53.595639767s
	I0223 17:24:59.701764   35681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:24:59.720409   35681 logs.go:277] 0 containers: []
	W0223 17:24:59.720422   35681 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:24:59.720498   35681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:24:59.740221   35681 logs.go:277] 0 containers: []
	W0223 17:24:59.740233   35681 logs.go:279] No container was found matching "etcd"
	I0223 17:24:59.740305   35681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:24:59.759280   35681 logs.go:277] 0 containers: []
	W0223 17:24:59.759292   35681 logs.go:279] No container was found matching "coredns"
	I0223 17:24:59.759361   35681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:24:59.780228   35681 logs.go:277] 0 containers: []
	W0223 17:24:59.780241   35681 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:24:59.780311   35681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:24:59.799736   35681 logs.go:277] 0 containers: []
	W0223 17:24:59.799748   35681 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:24:59.799824   35681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:24:59.818598   35681 logs.go:277] 0 containers: []
	W0223 17:24:59.818611   35681 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:24:59.818684   35681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:24:59.839301   35681 logs.go:277] 0 containers: []
	W0223 17:24:59.839314   35681 logs.go:279] No container was found matching "kindnet"
	I0223 17:24:59.839323   35681 logs.go:123] Gathering logs for kubelet ...
	I0223 17:24:59.839331   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:24:59.880818   35681 logs.go:123] Gathering logs for dmesg ...
	I0223 17:24:59.880838   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:24:59.895718   35681 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:24:59.895738   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:24:59.956673   35681 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:24:59.956686   35681 logs.go:123] Gathering logs for Docker ...
	I0223 17:24:59.956692   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:24:59.982126   35681 logs.go:123] Gathering logs for container status ...
	I0223 17:24:59.982149   35681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:25:02.029626   35681 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047455795s)
	W0223 17:25:02.029752   35681 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 17:25:02.029771   35681 out.go:239] * 
	* 
	W0223 17:25:02.029915   35681 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:25:02.029940   35681 out.go:239] * 
	* 
	W0223 17:25:02.030586   35681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 17:25:02.096074   35681 out.go:177] 
	W0223 17:25:02.138366   35681 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:25:02.138505   35681 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 17:25:02.138603   35681 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 17:25:02.216965   35681 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
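Reading the failure above: the start exits with K8S_KUBELET_NOT_RUNNING, the captured kubeadm stderr warns that Docker is using the "cgroupfs" cgroup driver, and minikube's own suggestion in the log is to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal troubleshooting sketch along the lines the log itself suggests (profile name taken from this test; the flags are illustrative, not a verified fix for this CI run):

	# inspect the kubelet and control-plane containers inside the node container
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-238000 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-238000 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-238000 "docker ps -a | grep kube | grep -v pause"

	# retry the oldest-version start with the cgroup driver the warning recommends
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 \
	  --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd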
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-238000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-238000: (1.687464713s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-238000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-238000 status --format={{.Host}}: exit status 7 (105.931409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m34.670286982s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-238000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (642.562617ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-238000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-238000
	    minikube start -p kubernetes-upgrade-238000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2380002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-238000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
E0223 17:29:44.815952   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-238000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (48.459525111s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-02-23 17:30:27.910741 -0800 PST m=+3003.261559937
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-238000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-238000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9685620d15e52bf5a1d0c29a4509f49b77c345ae1e7a33a57ddfbf1bd3a1eb97",
	        "Created": "2023-02-24T01:21:00.077777238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 584630,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:25:05.591003912Z",
	            "FinishedAt": "2023-02-24T01:25:02.770897251Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9685620d15e52bf5a1d0c29a4509f49b77c345ae1e7a33a57ddfbf1bd3a1eb97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9685620d15e52bf5a1d0c29a4509f49b77c345ae1e7a33a57ddfbf1bd3a1eb97/hostname",
	        "HostsPath": "/var/lib/docker/containers/9685620d15e52bf5a1d0c29a4509f49b77c345ae1e7a33a57ddfbf1bd3a1eb97/hosts",
	        "LogPath": "/var/lib/docker/containers/9685620d15e52bf5a1d0c29a4509f49b77c345ae1e7a33a57ddfbf1bd3a1eb97/9685620d15e52bf5a1d0c29a4509f49b77c345ae1e7a33a57ddfbf1bd3a1eb97-json.log",
	        "Name": "/kubernetes-upgrade-238000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-238000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-238000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d49f6428737fba37c152b7cef910a455d3da82f56a0631242f6f477f934b22eb-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d49f6428737fba37c152b7cef910a455d3da82f56a0631242f6f477f934b22eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d49f6428737fba37c152b7cef910a455d3da82f56a0631242f6f477f934b22eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d49f6428737fba37c152b7cef910a455d3da82f56a0631242f6f477f934b22eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-238000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-238000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-238000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-238000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-238000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c41784da6005c8205b994c42dc5c1b599a1565781a04367dbdb5991bca42317",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59808"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59809"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59811"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4c41784da600",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-238000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9685620d15e5",
	                        "kubernetes-upgrade-238000"
	                    ],
	                    "NetworkID": "e463dd3b9fbf868900220125adb0c7e08eefbe096a87572d0166961b13746a95",
	                    "EndpointID": "af109c217807052c025ec1bef02de3eb262a6029723846623497e9d7f0af9eb6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
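The inspect dump above shows the node container itself is up at post-mortem time (State.Status "running", SSH/API ports bound on 127.0.0.1). When triaging interactively, the same fields can be pulled without reading the full JSON; a small sketch using docker inspect's Go-template filter (the format string is illustrative):

	# container state and start time only
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}} started={{.State.StartedAt}}' kubernetes-upgrade-238000
	# published port mappings for the node container
	docker port kubernetes-upgrade-238000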
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-238000 -n kubernetes-upgrade-238000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-238000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-238000 logs -n 25: (3.373468928s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-152000 pgrep             | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:29 PST | 23 Feb 23 17:29 PST |
	|         | -a kubelet                                 |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | cat /etc/nsswitch.conf                     |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | cat /etc/hosts                             |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | cat /etc/resolv.conf                       |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | crictl pods                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | crictl ps --all                            |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | find /etc/cni -type f -exec sh             |                       |         |         |                     |                     |
	|         | -c 'echo {}; cat {}' \;                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | ip a s                                     |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | ip r s                                     |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | iptables-save                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | iptables -t nat -L -n -v                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | cat /run/flannel/subnet.env                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000                   | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST |                     |
	|         | sudo cat                                   |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json            |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | systemctl status kubelet --all             |                       |         |         |                     |                     |
	|         | --full --no-pager                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000                   | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | sudo systemctl cat kubelet                 |                       |         |         |                     |                     |
	|         | --no-pager                                 |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | journalctl -xeu kubelet --all              |                       |         |         |                     |                     |
	|         | --full --no-pager                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000                   | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | sudo cat                                   |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000                   | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | sudo cat                                   |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | systemctl status docker --all              |                       |         |         |                     |                     |
	|         | --full --no-pager                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000                   | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | sudo systemctl cat docker                  |                       |         |         |                     |                     |
	|         | --no-pager                                 |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | cat /etc/docker/daemon.json                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | docker system info                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo              | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | systemctl status cri-docker                |                       |         |         |                     |                     |
	|         | --all --full --no-pager                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000                   | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST | 23 Feb 23 17:30 PST |
	|         | sudo systemctl cat cri-docker              |                       |         |         |                     |                     |
	|         | --no-pager                                 |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-152000 sudo cat          | custom-flannel-152000 | jenkins | v1.29.0 | 23 Feb 23 17:30 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                       |         |         |                     |                     |
	|---------|--------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 17:29:39
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 17:29:36.889842   38450 addons.go:492] enable addons completed in 1.207928227s: enabled=[default-storageclass storage-provisioner]
	I0223 17:29:36.903683   38450 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-152000" to be "Ready" ...
	I0223 17:29:36.909199   38450 node_ready.go:49] node "custom-flannel-152000" has status "Ready":"True"
	I0223 17:29:36.909212   38450 node_ready.go:38] duration metric: took 5.504339ms waiting for node "custom-flannel-152000" to be "Ready" ...
	I0223 17:29:36.909220   38450 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:29:36.917943   38450 pod_ready.go:78] waiting up to 15m0s for pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:38.960858   38450 pod_ready.go:102] pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace has status "Ready":"False"
	I0223 17:29:39.507159   38624 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:29:39.527902   38624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:29:39.527921   38624 out.go:309] Setting ErrFile to fd 2...
	I0223 17:29:39.527930   38624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:29:39.528185   38624 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:29:39.570983   38624 out.go:303] Setting JSON to false
	I0223 17:29:39.591895   38624 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8954,"bootTime":1677193225,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:29:39.591967   38624 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:29:39.613850   38624 out.go:177] * [kubernetes-upgrade-238000] minikube v1.29.0 on Darwin 13.2
	I0223 17:29:39.688325   38624 notify.go:220] Checking for updates...
	I0223 17:29:39.725812   38624 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:29:39.783939   38624 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:29:39.841902   38624 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:29:39.900924   38624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:29:39.942778   38624 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:29:40.016961   38624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:29:40.054160   38624 config.go:182] Loaded profile config "kubernetes-upgrade-238000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:29:40.054518   38624 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:29:40.125321   38624 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:29:40.125477   38624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:29:40.294421   38624 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:64 SystemTime:2023-02-24 01:29:40.183215565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:29:40.318870   38624 out.go:177] * Using the docker driver based on existing profile
	I0223 17:29:40.339964   38624 start.go:296] selected driver: docker
	I0223 17:29:40.339986   38624 start.go:857] validating driver "docker" against &{Name:kubernetes-upgrade-238000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-238000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:29:40.340070   38624 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:29:40.342656   38624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:29:40.521351   38624 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:63 SystemTime:2023-02-24 01:29:40.401252473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:29:40.521507   38624 cni.go:84] Creating CNI manager for ""
	I0223 17:29:40.521522   38624 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:29:40.521539   38624 start_flags.go:319] config:
	{Name:kubernetes-upgrade-238000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-238000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP:}
	I0223 17:29:40.580006   38624 out.go:177] * Starting control plane node kubernetes-upgrade-238000 in cluster kubernetes-upgrade-238000
	I0223 17:29:40.601024   38624 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:29:40.638109   38624 out.go:177] * Pulling base image ...
	I0223 17:29:40.659120   38624 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:29:40.659120   38624 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:29:40.659176   38624 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 17:29:40.659187   38624 cache.go:57] Caching tarball of preloaded images
	I0223 17:29:40.659279   38624 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:29:40.659289   38624 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 17:29:40.659720   38624 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/config.json ...
	I0223 17:29:40.719500   38624 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:29:40.719522   38624 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:29:40.719568   38624 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:29:40.719699   38624 start.go:364] acquiring machines lock for kubernetes-upgrade-238000: {Name:mk2441e5d722fc72d266c863f46cd5fa5ce6ba49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:29:40.719818   38624 start.go:368] acquired machines lock for "kubernetes-upgrade-238000" in 96.193µs
	I0223 17:29:40.719854   38624 start.go:96] Skipping create...Using existing machine configuration
	I0223 17:29:40.719866   38624 fix.go:55] fixHost starting: 
	I0223 17:29:40.720158   38624 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Status}}
	I0223 17:29:40.784762   38624 fix.go:103] recreateIfNeeded on kubernetes-upgrade-238000: state=Running err=<nil>
	W0223 17:29:40.784792   38624 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 17:29:40.843356   38624 out.go:177] * Updating the running docker "kubernetes-upgrade-238000" container ...
	I0223 17:29:40.864455   38624 machine.go:88] provisioning docker machine ...
	I0223 17:29:40.864487   38624 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-238000"
	I0223 17:29:40.864579   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:40.929131   38624 main.go:141] libmachine: Using SSH client type: native
	I0223 17:29:40.929614   38624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59807 <nil> <nil>}
	I0223 17:29:40.929636   38624 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-238000 && echo "kubernetes-upgrade-238000" | sudo tee /etc/hostname
	I0223 17:29:41.072733   38624 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-238000
	
	I0223 17:29:41.072835   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:41.139992   38624 main.go:141] libmachine: Using SSH client type: native
	I0223 17:29:41.140343   38624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59807 <nil> <nil>}
	I0223 17:29:41.140357   38624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-238000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-238000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-238000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:29:41.273975   38624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
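
The SSH command above makes /etc/hosts idempotent: if the hostname is already mapped, nothing happens; otherwise an existing 127.0.1.1 entry is rewritten or a new one is appended. A rough Go analogue of that replace-or-append logic, assuming local file access instead of minikube's SSH runner; the function name ensureHostsEntry is hypothetical, not minikube's code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: if the hostname already appears
// in the hosts content it is left alone; otherwise an existing 127.0.1.1
// entry is rewritten or, failing that, a new one is appended.
func ensureHostsEntry(hosts, hostname string) string {
	trimmed := strings.TrimRight(hosts, "\n")
	lines := strings.Split(trimmed, "\n")

	for _, line := range lines {
		for _, field := range strings.Fields(line) {
			if field == hostname {
				return hosts // already mapped, nothing to do
			}
		}
	}

	replaced := false
	for i, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 1 && fields[0] == "127.0.1.1" {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return strings.Join(lines, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Print the adjusted content; writing it back would need root, as in the log.
	fmt.Print(ensureHostsEntry(string(data), "kubernetes-upgrade-238000"))
}
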
	I0223 17:29:41.274002   38624 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:29:41.274023   38624 ubuntu.go:177] setting up certificates
	I0223 17:29:41.274035   38624 provision.go:83] configureAuth start
	I0223 17:29:41.274128   38624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-238000
	I0223 17:29:41.340593   38624 provision.go:138] copyHostCerts
	I0223 17:29:41.340689   38624 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:29:41.340699   38624 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:29:41.340808   38624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:29:41.341023   38624 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:29:41.341029   38624 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:29:41.341088   38624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:29:41.341241   38624 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:29:41.341247   38624 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:29:41.341303   38624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:29:41.341425   38624 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-238000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-238000]
	I0223 17:29:41.502958   38624 provision.go:172] copyRemoteCerts
	I0223 17:29:41.503025   38624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:29:41.503075   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:41.561352   38624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59807 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:29:41.657024   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:29:41.675049   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0223 17:29:41.693100   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:29:41.710583   38624 provision.go:86] duration metric: configureAuth took 436.525056ms
	I0223 17:29:41.710600   38624 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:29:41.710738   38624 config.go:182] Loaded profile config "kubernetes-upgrade-238000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:29:41.710805   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:41.768839   38624 main.go:141] libmachine: Using SSH client type: native
	I0223 17:29:41.769183   38624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59807 <nil> <nil>}
	I0223 17:29:41.769194   38624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:29:41.901904   38624 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:29:41.901919   38624 ubuntu.go:71] root file system type: overlay
	I0223 17:29:41.902013   38624 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:29:41.902100   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:41.963167   38624 main.go:141] libmachine: Using SSH client type: native
	I0223 17:29:41.963511   38624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59807 <nil> <nil>}
	I0223 17:29:41.963575   38624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:29:42.108910   38624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:29:42.109012   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:42.173578   38624 main.go:141] libmachine: Using SSH client type: native
	I0223 17:29:42.173936   38624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59807 <nil> <nil>}
	I0223 17:29:42.173955   38624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:29:42.321747   38624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:29:42.321768   38624 machine.go:91] provisioned docker machine in 1.457294093s
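
Provisioning writes the desired unit to /lib/systemd/system/docker.service.new, diffs it against the installed file, and only moves it into place and restarts Docker when they differ, so an unchanged machine is not restarted needlessly. A small Go sketch of that compare-before-replace check, assuming local files instead of the SSH runner; needsUpdate and the stub unit content are hypothetical:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsUpdate reports whether the unit file at path differs from the desired
// content; a missing file also counts as needing an update.
func needsUpdate(path string, desired []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	return !bytes.Equal(current, desired), nil
}

func main() {
	desired := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // stub content
	changed, err := needsUpdate("/lib/systemd/system/docker.service", desired)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		// In the log above this is where docker.service.new is moved into place
		// and `systemctl daemon-reload` / `systemctl restart docker` run.
		fmt.Println("unit changed: would install new file and restart docker")
	} else {
		fmt.Println("unit unchanged: skipping restart")
	}
}
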
	I0223 17:29:42.321782   38624 start.go:300] post-start starting for "kubernetes-upgrade-238000" (driver="docker")
	I0223 17:29:42.321803   38624 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:29:42.321899   38624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:29:42.322021   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:42.385472   38624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59807 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:29:42.484126   38624 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:29:42.488579   38624 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:29:42.488599   38624 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:29:42.488613   38624 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:29:42.488620   38624 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:29:42.488629   38624 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:29:42.488737   38624 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:29:42.488897   38624 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:29:42.489089   38624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:29:42.499334   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:29:42.524673   38624 start.go:303] post-start completed in 202.861733ms
	I0223 17:29:42.524781   38624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:29:42.524868   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:42.588058   38624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59807 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:29:42.682559   38624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:29:42.690101   38624 fix.go:57] fixHost completed within 1.970220915s
	I0223 17:29:42.690139   38624 start.go:83] releasing machines lock for "kubernetes-upgrade-238000", held for 1.970301573s
	I0223 17:29:42.690284   38624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-238000
	I0223 17:29:42.759870   38624 ssh_runner.go:195] Run: cat /version.json
	I0223 17:29:42.759887   38624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 17:29:42.759946   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:42.759965   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:42.838278   38624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59807 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:29:42.839155   38624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59807 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:29:42.986226   38624 ssh_runner.go:195] Run: systemctl --version
	I0223 17:29:42.993239   38624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 17:29:42.999601   38624 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 17:29:42.999672   38624 ssh_runner.go:195] Run: which cri-dockerd
	I0223 17:29:43.004424   38624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 17:29:43.014371   38624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 17:29:43.033097   38624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 17:29:43.042341   38624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 17:29:43.051159   38624 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 17:29:43.051177   38624 start.go:485] detecting cgroup driver to use...
	I0223 17:29:43.051189   38624 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:29:43.051278   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:29:43.064829   38624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 17:29:43.074182   38624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:29:43.083877   38624 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:29:43.083957   38624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:29:43.096503   38624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:29:43.109678   38624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:29:43.122680   38624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:29:43.133698   38624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:29:43.145247   38624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:29:43.155926   38624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:29:43.164943   38624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:29:43.173315   38624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:29:43.257657   38624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:29:41.436172   38450 pod_ready.go:102] pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace has status "Ready":"False"
	I0223 17:29:43.437176   38450 pod_ready.go:102] pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace has status "Ready":"False"
	I0223 17:29:45.522100   38624 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (2.264409563s)
	I0223 17:29:45.522127   38624 start.go:485] detecting cgroup driver to use...
	I0223 17:29:45.522143   38624 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:29:45.522226   38624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:29:45.543751   38624 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:29:45.543845   38624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:29:45.578835   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:29:45.602239   38624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:29:45.705179   38624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:29:45.808744   38624 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:29:45.808765   38624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:29:45.835183   38624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:29:45.949009   38624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:29:46.734220   38624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:29:46.805692   38624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 17:29:46.885054   38624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:29:46.958905   38624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:29:47.030262   38624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 17:29:47.053722   38624 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 17:29:47.053829   38624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 17:29:47.058753   38624 start.go:553] Will wait 60s for crictl version
	I0223 17:29:47.058814   38624 ssh_runner.go:195] Run: which crictl
	I0223 17:29:47.063157   38624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 17:29:47.150914   38624 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
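
Before using the runtime, the start code waits up to 60s for /var/run/cri-dockerd.sock to exist and then up to 60s for `crictl version` to answer. A minimal sketch of that poll-with-deadline pattern, assuming a plain os.Stat loop; waitForPath is a hypothetical helper, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path to appear, giving up after timeout.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}
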
	I0223 17:29:47.151008   38624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:29:47.187209   38624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:29:47.277845   38624 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 17:29:47.278027   38624 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-238000 dig +short host.docker.internal
	I0223 17:29:47.429559   38624 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:29:47.429739   38624 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:29:47.437509   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:47.510714   38624 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:29:47.510849   38624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:29:47.540985   38624 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:29:47.541022   38624 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:29:47.541160   38624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:29:47.609330   38624 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:29:47.609361   38624 cache_images.go:84] Images are preloaded, skipping loading
	I0223 17:29:47.609487   38624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:29:47.678490   38624 cni.go:84] Creating CNI manager for ""
	I0223 17:29:47.678514   38624 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:29:47.678539   38624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:29:47.678557   38624 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-238000 NodeName:kubernetes-upgrade-238000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:29:47.678698   38624 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-238000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 17:29:47.678783   38624 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-238000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-238000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
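
The generated /var/tmp/minikube/kubeadm.yaml shown above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short sketch that enumerates the kinds in such a multi-document file, assuming the gopkg.in/yaml.v3 package is available; the local file path is illustrative:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// e.g. a copy of /var/tmp/minikube/kubeadm.yaml pulled off the node
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
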
	I0223 17:29:47.678860   38624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 17:29:47.699050   38624 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:29:47.699153   38624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:29:47.714389   38624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0223 17:29:47.734015   38624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:29:47.753804   38624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0223 17:29:47.790449   38624 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:29:47.800592   38624 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000 for IP: 192.168.76.2
	I0223 17:29:47.800614   38624 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:29:47.800801   38624 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:29:47.800858   38624 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:29:47.800963   38624 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.key
	I0223 17:29:47.801043   38624 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key.31bdca25
	I0223 17:29:47.801115   38624 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.key
	I0223 17:29:47.801341   38624 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:29:47.801380   38624 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:29:47.801391   38624 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:29:47.801431   38624 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:29:47.801470   38624 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:29:47.801511   38624 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:29:47.801587   38624 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:29:47.802188   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:29:47.832756   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 17:29:47.883276   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:29:47.909528   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 17:29:47.944510   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:29:48.000817   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:29:48.031804   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:29:48.095140   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:29:48.129290   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:29:48.176130   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:29:48.201694   38624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:29:48.230766   38624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:29:48.251310   38624 ssh_runner.go:195] Run: openssl version
	I0223 17:29:48.257446   38624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:29:48.267084   38624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:29:48.271780   38624 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:29:48.271840   38624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:29:48.277969   38624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:29:48.289885   38624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:29:48.300904   38624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:29:48.307035   38624 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:29:48.307134   38624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:29:48.316729   38624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 17:29:48.330995   38624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:29:48.351305   38624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:29:48.357812   38624 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:29:48.357899   38624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:29:48.381445   38624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
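
The certs step copies each CA into /usr/share/ca-certificates, computes its OpenSSL subject hash, and links /etc/ssl/certs/<hash>.0 back to it so OpenSSL-based clients trust it. A sketch of that hash-and-symlink step using the same openssl invocation seen in the log; the paths and the linkCACert name are illustrative, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it (the `ln -fs` step in the log).
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
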
	I0223 17:29:48.394957   38624 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-238000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-238000 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:29:48.395101   38624 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:29:48.438526   38624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:29:48.483839   38624 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 17:29:48.483861   38624 kubeadm.go:633] restartCluster start
	I0223 17:29:48.483924   38624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 17:29:48.494024   38624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:29:48.494137   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:29:48.563137   38624 kubeconfig.go:92] found "kubernetes-upgrade-238000" server: "https://127.0.0.1:59811"
	I0223 17:29:48.564047   38624 kapi.go:59] client config for kubernetes-upgrade-238000: &rest.Config{Host:"https://127.0.0.1:59811", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:29:48.564923   38624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 17:29:48.573587   38624 api_server.go:165] Checking apiserver status ...
	I0223 17:29:48.573648   38624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:29:48.592477   38624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/11718/cgroup
	W0223 17:29:48.608008   38624 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/11718/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:29:48.608097   38624 ssh_runner.go:195] Run: ls
	I0223 17:29:48.614008   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:29:45.438868   38450 pod_ready.go:102] pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace has status "Ready":"False"
	I0223 17:29:47.440915   38450 pod_ready.go:102] pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace has status "Ready":"False"
	I0223 17:29:49.480099   38450 pod_ready.go:102] pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace has status "Ready":"False"
	I0223 17:29:50.432377   38450 pod_ready.go:97] error getting pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-ggnc8" not found
	I0223 17:29:50.432395   38450 pod_ready.go:81] duration metric: took 13.514371611s waiting for pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace to be "Ready" ...
	E0223 17:29:50.432434   38450 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-ggnc8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-ggnc8" not found
	I0223 17:29:50.432443   38450 pod_ready.go:78] waiting up to 15m0s for pod "coredns-787d4945fb-xskqk" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.444165   38450 pod_ready.go:92] pod "coredns-787d4945fb-xskqk" in "kube-system" namespace has status "Ready":"True"
	I0223 17:29:51.444179   38450 pod_ready.go:81] duration metric: took 1.011724904s waiting for pod "coredns-787d4945fb-xskqk" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.444187   38450 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.449423   38450 pod_ready.go:92] pod "etcd-custom-flannel-152000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:29:51.449433   38450 pod_ready.go:81] duration metric: took 5.229689ms waiting for pod "etcd-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.449440   38450 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.454971   38450 pod_ready.go:92] pod "kube-apiserver-custom-flannel-152000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:29:51.454982   38450 pod_ready.go:81] duration metric: took 5.537518ms waiting for pod "kube-apiserver-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.454988   38450 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.476464   38450 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-152000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:29:51.476480   38450 pod_ready.go:81] duration metric: took 21.485768ms waiting for pod "kube-controller-manager-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.476492   38450 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-zqt2x" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.482829   38450 pod_ready.go:92] pod "kube-proxy-zqt2x" in "kube-system" namespace has status "Ready":"True"
	I0223 17:29:51.482842   38450 pod_ready.go:81] duration metric: took 6.331255ms waiting for pod "kube-proxy-zqt2x" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.482850   38450 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.843786   38450 pod_ready.go:92] pod "kube-scheduler-custom-flannel-152000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:29:51.843800   38450 pod_ready.go:81] duration metric: took 360.942814ms waiting for pod "kube-scheduler-custom-flannel-152000" in "kube-system" namespace to be "Ready" ...
	I0223 17:29:51.843809   38450 pod_ready.go:38] duration metric: took 14.934514873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:29:51.843829   38450 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:29:51.843902   38450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:29:51.854281   38450 api_server.go:71] duration metric: took 15.644274877s to wait for apiserver process to appear ...
	I0223 17:29:51.854293   38450 api_server.go:87] waiting for apiserver healthz status ...
	I0223 17:29:51.854311   38450 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60374/healthz ...
	I0223 17:29:51.860361   38450 api_server.go:278] https://127.0.0.1:60374/healthz returned 200:
	ok
	I0223 17:29:51.861715   38450 api_server.go:140] control plane version: v1.26.1
	I0223 17:29:51.861724   38450 api_server.go:130] duration metric: took 7.427148ms to wait for apiserver health ...
	I0223 17:29:51.861729   38450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 17:29:52.045911   38450 system_pods.go:59] 7 kube-system pods found
	I0223 17:29:52.045929   38450 system_pods.go:61] "coredns-787d4945fb-xskqk" [c8814315-8a78-4b62-9b82-9bbc2f68cf54] Running
	I0223 17:29:52.045933   38450 system_pods.go:61] "etcd-custom-flannel-152000" [de5f3b49-a456-4072-855a-e96386e7ecfa] Running
	I0223 17:29:52.045945   38450 system_pods.go:61] "kube-apiserver-custom-flannel-152000" [f61a2db4-7de3-40ff-8e44-7936a560081c] Running
	I0223 17:29:52.045950   38450 system_pods.go:61] "kube-controller-manager-custom-flannel-152000" [51fc59ee-f788-4470-8131-699d27ef6ed3] Running
	I0223 17:29:52.045955   38450 system_pods.go:61] "kube-proxy-zqt2x" [ccb8322b-4d54-4760-bf84-90e33de675e6] Running
	I0223 17:29:52.045959   38450 system_pods.go:61] "kube-scheduler-custom-flannel-152000" [1ff35e80-eb0b-4231-be6a-cb391188464a] Running
	I0223 17:29:52.045969   38450 system_pods.go:61] "storage-provisioner" [3887aefa-a2b0-4a33-ba7a-8d00e28b449d] Running
	I0223 17:29:52.045973   38450 system_pods.go:74] duration metric: took 184.239208ms to wait for pod list to return data ...
	I0223 17:29:52.045981   38450 default_sa.go:34] waiting for default service account to be created ...
	I0223 17:29:52.242575   38450 default_sa.go:45] found service account: "default"
	I0223 17:29:52.242585   38450 default_sa.go:55] duration metric: took 196.599167ms for default service account to be created ...
	I0223 17:29:52.242591   38450 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 17:29:52.445824   38450 system_pods.go:86] 7 kube-system pods found
	I0223 17:29:52.445839   38450 system_pods.go:89] "coredns-787d4945fb-xskqk" [c8814315-8a78-4b62-9b82-9bbc2f68cf54] Running
	I0223 17:29:52.445844   38450 system_pods.go:89] "etcd-custom-flannel-152000" [de5f3b49-a456-4072-855a-e96386e7ecfa] Running
	I0223 17:29:52.445848   38450 system_pods.go:89] "kube-apiserver-custom-flannel-152000" [f61a2db4-7de3-40ff-8e44-7936a560081c] Running
	I0223 17:29:52.445853   38450 system_pods.go:89] "kube-controller-manager-custom-flannel-152000" [51fc59ee-f788-4470-8131-699d27ef6ed3] Running
	I0223 17:29:52.445860   38450 system_pods.go:89] "kube-proxy-zqt2x" [ccb8322b-4d54-4760-bf84-90e33de675e6] Running
	I0223 17:29:52.445879   38450 system_pods.go:89] "kube-scheduler-custom-flannel-152000" [1ff35e80-eb0b-4231-be6a-cb391188464a] Running
	I0223 17:29:52.445886   38450 system_pods.go:89] "storage-provisioner" [3887aefa-a2b0-4a33-ba7a-8d00e28b449d] Running
	I0223 17:29:52.445891   38450 system_pods.go:126] duration metric: took 203.296246ms to wait for k8s-apps to be running ...
	I0223 17:29:52.445897   38450 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 17:29:52.445963   38450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:29:52.456144   38450 system_svc.go:56] duration metric: took 10.242974ms WaitForService to wait for kubelet.
	I0223 17:29:52.456156   38450 kubeadm.go:578] duration metric: took 16.246150572s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 17:29:52.456170   38450 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:29:52.644832   38450 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:29:52.644849   38450 node_conditions.go:123] node cpu capacity is 6
	I0223 17:29:52.644859   38450 node_conditions.go:105] duration metric: took 188.685156ms to run NodePressure ...
	I0223 17:29:52.644867   38450 start.go:228] waiting for startup goroutines ...
	I0223 17:29:52.644871   38450 start.go:233] waiting for cluster config update ...
	I0223 17:29:52.644883   38450 start.go:242] writing updated cluster config ...
	I0223 17:29:52.645194   38450 ssh_runner.go:195] Run: rm -f paused
	I0223 17:29:52.684328   38450 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 17:29:52.706888   38450 out.go:177] * Done! kubectl is now configured to use "custom-flannel-152000" cluster and "default" namespace by default
	I0223 17:29:53.614340   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0223 17:29:53.614418   38624 retry.go:31] will retry after 262.591584ms: state is "Stopped"
	I0223 17:29:53.877156   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:29:58.878500   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0223 17:29:58.878535   38624 retry.go:31] will retry after 277.316258ms: state is "Stopped"
	I0223 17:29:59.156084   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:04.157853   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0223 17:30:04.658633   38624 api_server.go:165] Checking apiserver status ...
	I0223 17:30:04.658799   38624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:30:04.670672   38624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/11718/cgroup
	W0223 17:30:04.678589   38624 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/11718/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:30:04.678645   38624 ssh_runner.go:195] Run: ls
	I0223 17:30:04.682605   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:08.141447   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:08.141473   38624 retry.go:31] will retry after 250.475737ms: state is "Stopped"
	I0223 17:30:08.392293   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:08.393790   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:08.393811   38624 retry.go:31] will retry after 345.705972ms: state is "Stopped"
	I0223 17:30:08.739616   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:08.741427   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:08.741450   38624 retry.go:31] will retry after 345.786874ms: state is "Stopped"
	I0223 17:30:09.088958   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:09.091460   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:09.091488   38624 retry.go:31] will retry after 580.993503ms: state is "Stopped"
	I0223 17:30:09.672803   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:09.675374   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:09.675398   38624 retry.go:31] will retry after 661.373579ms: state is "Stopped"
	I0223 17:30:10.336930   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:10.339186   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:10.339215   38624 retry.go:31] will retry after 753.135642ms: state is "Stopped"
	I0223 17:30:11.093555   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:11.094926   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:11.094950   38624 retry.go:31] will retry after 737.5135ms: state is "Stopped"
	I0223 17:30:11.832559   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:11.834998   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:11.835022   38624 retry.go:31] will retry after 1.114167611s: state is "Stopped"
	I0223 17:30:12.950127   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:12.951612   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:12.951633   38624 retry.go:31] will retry after 1.453543196s: state is "Stopped"
	I0223 17:30:14.406078   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:14.408803   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:14.408841   38624 retry.go:31] will retry after 2.253264121s: state is "Stopped"
	I0223 17:30:16.664154   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:16.665733   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:16.665754   38624 retry.go:31] will retry after 2.715067491s: state is "Stopped"
	I0223 17:30:19.382073   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:19.383704   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:19.383727   38624 api_server.go:165] Checking apiserver status ...
	I0223 17:30:19.383774   38624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:30:19.393817   38624 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:30:19.393832   38624 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 17:30:19.393839   38624 kubeadm.go:1120] stopping kube-system containers ...
	I0223 17:30:19.393911   38624 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:30:19.417654   38624 docker.go:456] Stopping containers: [39221d210e7b 517e485f9bf9 e712fda4ced6 91dc140ff8f3 255d1f3ca1d3 e0929db2c0f7 ef24b2815d61 5bb2d57ada2e 8afeb8eb4125 4c1692ce729b a9ec619d6ab8 0a0d3748d002 514ed4ae00ee 41780765f2b2 481a5f4bcc2d 2087482b4e90 8cd42637ea1f]
	I0223 17:30:19.417748   38624 ssh_runner.go:195] Run: docker stop 39221d210e7b 517e485f9bf9 e712fda4ced6 91dc140ff8f3 255d1f3ca1d3 e0929db2c0f7 ef24b2815d61 5bb2d57ada2e 8afeb8eb4125 4c1692ce729b a9ec619d6ab8 0a0d3748d002 514ed4ae00ee 41780765f2b2 481a5f4bcc2d 2087482b4e90 8cd42637ea1f
	I0223 17:30:19.657247   38624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 17:30:19.698553   38624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:30:19.707647   38624 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 24 01:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 24 01:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 24 01:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 24 01:29 /etc/kubernetes/scheduler.conf
	
	I0223 17:30:19.707707   38624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 17:30:19.716607   38624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 17:30:19.725562   38624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 17:30:19.734058   38624 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:30:19.734130   38624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 17:30:19.743007   38624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 17:30:19.751656   38624 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:30:19.751723   38624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 17:30:19.759769   38624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:30:19.769288   38624 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 17:30:19.769307   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:30:19.827734   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:30:20.519088   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:30:20.661130   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:30:20.734383   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:30:20.881591   38624 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:30:20.881680   38624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:30:21.394663   38624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:30:21.894732   38624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:30:21.910792   38624 api_server.go:71] duration metric: took 1.029202122s to wait for apiserver process to appear ...
	I0223 17:30:21.910810   38624 api_server.go:87] waiting for apiserver healthz status ...
	I0223 17:30:21.910834   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:21.911972   38624 api_server.go:268] stopped: https://127.0.0.1:59811/healthz: Get "https://127.0.0.1:59811/healthz": EOF
	I0223 17:30:22.412184   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:24.759406   38624 api_server.go:278] https://127.0.0.1:59811/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 17:30:24.759426   38624 api_server.go:102] status: https://127.0.0.1:59811/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 17:30:24.912203   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:24.917646   38624 api_server.go:278] https://127.0.0.1:59811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:30:24.917662   38624 api_server.go:102] status: https://127.0.0.1:59811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:30:25.412087   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:25.417039   38624 api_server.go:278] https://127.0.0.1:59811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:30:25.417053   38624 api_server.go:102] status: https://127.0.0.1:59811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:30:25.912199   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:25.917647   38624 api_server.go:278] https://127.0.0.1:59811/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:30:25.917674   38624 api_server.go:102] status: https://127.0.0.1:59811/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:30:26.412179   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:26.417439   38624 api_server.go:278] https://127.0.0.1:59811/healthz returned 200:
	ok
	I0223 17:30:26.424563   38624 api_server.go:140] control plane version: v1.26.1
	I0223 17:30:26.424575   38624 api_server.go:130] duration metric: took 4.513737124s to wait for apiserver health ...
	I0223 17:30:26.424581   38624 cni.go:84] Creating CNI manager for ""
	I0223 17:30:26.424590   38624 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:30:26.445291   38624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 17:30:26.466257   38624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 17:30:26.476346   38624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 17:30:26.491506   38624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 17:30:26.498139   38624 system_pods.go:59] 5 kube-system pods found
	I0223 17:30:26.498157   38624 system_pods.go:61] "etcd-kubernetes-upgrade-238000" [488bce95-201a-42c6-92b5-277193a2c0ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 17:30:26.498166   38624 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-238000" [d481582f-7976-4805-b693-abe5516db50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 17:30:26.498176   38624 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-238000" [8ca43494-ce5c-48a9-92f2-88cac1484ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 17:30:26.498182   38624 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-238000" [dde47e3f-0dd4-43a1-9032-e6a84aa38e5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 17:30:26.498186   38624 system_pods.go:61] "storage-provisioner" [55297d01-0c33-4112-9faf-b957e61f70a4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0223 17:30:26.498191   38624 system_pods.go:74] duration metric: took 6.67326ms to wait for pod list to return data ...
	I0223 17:30:26.498198   38624 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:30:26.502234   38624 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:30:26.502252   38624 node_conditions.go:123] node cpu capacity is 6
	I0223 17:30:26.502261   38624 node_conditions.go:105] duration metric: took 4.059993ms to run NodePressure ...
	I0223 17:30:26.502275   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:30:26.652054   38624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 17:30:26.660463   38624 ops.go:34] apiserver oom_adj: -16
	I0223 17:30:26.660477   38624 kubeadm.go:637] restartCluster took 38.176441607s
	I0223 17:30:26.660482   38624 kubeadm.go:403] StartCluster complete in 38.265366539s
	I0223 17:30:26.660496   38624 settings.go:142] acquiring lock: {Name:mk850986f273a9d917e0b12c78b43b3396ccf03c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:30:26.660569   38624 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:30:26.661345   38624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/kubeconfig: {Name:mk7d15723b32e59bb8ea0777461e49fb0d77cb39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:30:26.661614   38624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 17:30:26.661645   38624 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 17:30:26.661704   38624 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-238000"
	I0223 17:30:26.661716   38624 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-238000"
	W0223 17:30:26.661722   38624 addons.go:236] addon storage-provisioner should already be in state true
	I0223 17:30:26.661721   38624 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-238000"
	I0223 17:30:26.661745   38624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-238000"
	I0223 17:30:26.661767   38624 config.go:182] Loaded profile config "kubernetes-upgrade-238000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:30:26.661769   38624 host.go:66] Checking if "kubernetes-upgrade-238000" exists ...
	I0223 17:30:26.662109   38624 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Status}}
	I0223 17:30:26.662122   38624 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Status}}
	I0223 17:30:26.662094   38624 kapi.go:59] client config for kubernetes-upgrade-238000: &rest.Config{Host:"https://127.0.0.1:59811", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:30:26.669492   38624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-238000" context rescaled to 1 replicas
	I0223 17:30:26.669532   38624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 17:30:26.691079   38624 out.go:177] * Verifying Kubernetes components...
	I0223 17:30:26.748799   38624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:30:26.756005   38624 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0223 17:30:26.761970   38624 kapi.go:59] client config for kubernetes-upgrade-238000: &rest.Config{Host:"https://127.0.0.1:59811", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubernetes-upgrade-238000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 17:30:26.763523   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:30:26.781703   38624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 17:30:26.789566   38624 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-238000"
	W0223 17:30:26.802754   38624 addons.go:236] addon default-storageclass should already be in state true
	I0223 17:30:26.802824   38624 host.go:66] Checking if "kubernetes-upgrade-238000" exists ...
	I0223 17:30:26.802846   38624 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 17:30:26.802869   38624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 17:30:26.802967   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:30:26.803942   38624 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-238000 --format={{.State.Status}}
	I0223 17:30:26.864862   38624 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:30:26.864963   38624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:30:26.884117   38624 api_server.go:71] duration metric: took 214.546521ms to wait for apiserver process to appear ...
	I0223 17:30:26.884140   38624 api_server.go:87] waiting for apiserver healthz status ...
	I0223 17:30:26.884154   38624 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59811/healthz ...
	I0223 17:30:26.889341   38624 api_server.go:278] https://127.0.0.1:59811/healthz returned 200:
	ok
	I0223 17:30:26.890819   38624 api_server.go:140] control plane version: v1.26.1
	I0223 17:30:26.890829   38624 api_server.go:130] duration metric: took 6.683093ms to wait for apiserver health ...
	I0223 17:30:26.890835   38624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 17:30:26.895138   38624 system_pods.go:59] 5 kube-system pods found
	I0223 17:30:26.895163   38624 system_pods.go:61] "etcd-kubernetes-upgrade-238000" [488bce95-201a-42c6-92b5-277193a2c0ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 17:30:26.895177   38624 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-238000" [d481582f-7976-4805-b693-abe5516db50c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 17:30:26.895185   38624 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-238000" [8ca43494-ce5c-48a9-92f2-88cac1484ee4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 17:30:26.895191   38624 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-238000" [dde47e3f-0dd4-43a1-9032-e6a84aa38e5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 17:30:26.895196   38624 system_pods.go:61] "storage-provisioner" [55297d01-0c33-4112-9faf-b957e61f70a4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0223 17:30:26.895207   38624 system_pods.go:74] duration metric: took 4.368497ms to wait for pod list to return data ...
	I0223 17:30:26.895214   38624 kubeadm.go:578] duration metric: took 225.655139ms to wait for : map[apiserver:true system_pods:true] ...
	I0223 17:30:26.895222   38624 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:30:26.897910   38624 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:30:26.897923   38624 node_conditions.go:123] node cpu capacity is 6
	I0223 17:30:26.897935   38624 node_conditions.go:105] duration metric: took 2.709293ms to run NodePressure ...
	I0223 17:30:26.897943   38624 start.go:228] waiting for startup goroutines ...
	I0223 17:30:26.931140   38624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59807 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:30:26.931391   38624 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 17:30:26.931400   38624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 17:30:26.931464   38624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-238000
	I0223 17:30:27.000632   38624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59807 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/kubernetes-upgrade-238000/id_rsa Username:docker}
	I0223 17:30:27.037037   38624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 17:30:27.109344   38624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 17:30:27.761298   38624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 17:30:27.790522   38624 addons.go:492] enable addons completed in 1.128874044s: enabled=[storage-provisioner default-storageclass]
	I0223 17:30:27.790551   38624 start.go:233] waiting for cluster config update ...
	I0223 17:30:27.790564   38624 start.go:242] writing updated cluster config ...
	I0223 17:30:27.790888   38624 ssh_runner.go:195] Run: rm -f paused
	I0223 17:30:27.831838   38624 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 17:30:27.852898   38624 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-238000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 01:25:05 UTC, end at Fri 2023-02-24 01:30:29 UTC. --
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.521342859Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.521368870Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.521384046Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.521408014Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.521572684Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.521665982Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.521678746Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.522076032Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.537409103Z" level=info msg="Loading containers: start."
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.654098447Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.691462150Z" level=info msg="Loading containers: done."
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.704360804Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.704437963Z" level=info msg="Daemon has completed initialization"
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.731474797Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 01:29:46 kubernetes-upgrade-238000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.740863678Z" level=info msg="API listen on [::]:2376"
	Feb 24 01:29:46 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:29:46.744005742Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 01:30:08 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:08.152628655Z" level=info msg="ignoring event" container=91dc140ff8f318160554d78b28afc28818fe2eaff04e17beadae6a36b6abfb32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:30:09 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:09.163823322Z" level=info msg="ignoring event" container=e712fda4ced67515d2e7d07c4901f2b32478bc96acef869a0daf307d7ab7c774 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:30:19 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:19.479535495Z" level=info msg="ignoring event" container=5bb2d57ada2e43e22635e8ab7dd1ae6b01e03c64c46a05fe9f0a8b3bcfc79c06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:30:19 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:19.480289039Z" level=info msg="ignoring event" container=e0929db2c0f79b2dccedde47ef6fe20cead0ff2262d2124230ce0dcc46dca4d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:30:19 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:19.480307876Z" level=info msg="ignoring event" container=ef24b2815d61d4e4f50139d9c5c531c250a503b360c45c788f771e3fdc083967 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:30:19 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:19.484132258Z" level=info msg="ignoring event" container=39221d210e7b95fdf85ecd67ffb657b1bf5dd6f45257a3b2a4eeda4b05d3facb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:30:19 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:19.491617477Z" level=info msg="ignoring event" container=255d1f3ca1d31eaf16d3f4bd60064c9542b08f578245f085cb052a90782734cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 01:30:19 kubernetes-upgrade-238000 dockerd[11109]: time="2023-02-24T01:30:19.492274767Z" level=info msg="ignoring event" container=517e485f9bf9ecf56405b5784175b1fb8cd1659de65468f6b24c728c0e5facba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	59f4bd533c767       655493523f607       7 seconds ago       Running             kube-scheduler            3                   b3d33c606be38
	77b1234252042       e9c08e11b07f6       8 seconds ago       Running             kube-controller-manager   2                   6b9b4e20e5571
	9813c08201ff1       deb04688c4a35       8 seconds ago       Running             kube-apiserver            2                   5c69bb7a2cdbb
	1363d473b8a61       fce326961ae2d       8 seconds ago       Running             etcd                      3                   6fc0138821c7e
	39221d210e7b9       fce326961ae2d       22 seconds ago      Exited              etcd                      2                   e0929db2c0f79
	517e485f9bf9e       655493523f607       23 seconds ago      Exited              kube-scheduler            2                   255d1f3ca1d31
	e712fda4ced67       e9c08e11b07f6       42 seconds ago      Exited              kube-controller-manager   1                   5bb2d57ada2e4
	91dc140ff8f31       deb04688c4a35       42 seconds ago      Exited              kube-apiserver            1                   ef24b2815d61d
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-238000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-238000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=kubernetes-upgrade-238000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T17_29_37_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:29:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-238000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:30:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:30:24 +0000   Fri, 24 Feb 2023 01:29:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:30:24 +0000   Fri, 24 Feb 2023 01:29:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:30:24 +0000   Fri, 24 Feb 2023 01:29:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:30:24 +0000   Fri, 24 Feb 2023 01:29:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-238000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    57e18f70-d77e-4b45-ae15-597714d7865f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-238000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kube-apiserver-kubernetes-upgrade-238000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-238000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-kubernetes-upgrade-238000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 52s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  52s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  52s              kubelet  Node kubernetes-upgrade-238000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s              kubelet  Node kubernetes-upgrade-238000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s              kubelet  Node kubernetes-upgrade-238000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                50s              kubelet  Node kubernetes-upgrade-238000 status is now: NodeReady
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8s (x7 over 9s)  kubelet  Node kubernetes-upgrade-238000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x7 over 9s)  kubelet  Node kubernetes-upgrade-238000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)  kubelet  Node kubernetes-upgrade-238000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.000095] FS-Cache: O-key=[8] 'c95bc40400000000'
	[  +0.000057] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000081] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000007e1c140
	[  +0.000047] FS-Cache: N-key=[8] 'c95bc40400000000'
	[  +0.003060] FS-Cache: Duplicate cookie detected
	[  +0.000048] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000069] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=00000000934117af
	[  +0.000038] FS-Cache: O-key=[8] 'c95bc40400000000'
	[  +0.000044] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000058] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000099c779f3
	[  +0.000182] FS-Cache: N-key=[8] 'c95bc40400000000'
	[  +3.488321] FS-Cache: Duplicate cookie detected
	[  +0.000062] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000055] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=000000004b032eea
	[  +0.000065] FS-Cache: O-key=[8] 'c85bc40400000000'
	[  +0.000035] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000042] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000047d46db5
	[  +0.000055] FS-Cache: N-key=[8] 'c85bc40400000000'
	[  +0.398634] FS-Cache: Duplicate cookie detected
	[  +0.000091] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=000000008a09309a{9p.inode} n=000000004a75bbd2
	[  +0.000047] FS-Cache: O-key=[8] 'd35bc40400000000'
	[  +0.000054] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000051] FS-Cache: N-cookie d=000000008a09309a{9p.inode} n=0000000062f74fb0
	[  +0.000064] FS-Cache: N-key=[8] 'd35bc40400000000'
	
	* 
	* ==> etcd [1363d473b8a6] <==
	* {"level":"info","ts":"2023-02-24T01:30:21.884Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-24T01:30:21.884Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-24T01:30:21.884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-02-24T01:30:21.884Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-02-24T01:30:21.884Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:30:21.884Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T01:30:21.885Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T01:30:21.886Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T01:30:21.886Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T01:30:21.886Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-24T01:30:21.886Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-24T01:30:23.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-24T01:30:23.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-24T01:30:23.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-24T01:30:23.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-02-24T01:30:23.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-02-24T01:30:23.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-02-24T01:30:23.376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-02-24T01:30:23.377Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-238000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T01:30:23.377Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:30:23.377Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:30:23.377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T01:30:23.377Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T01:30:23.379Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-02-24T01:30:23.379Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [39221d210e7b] <==
	* {"level":"info","ts":"2023-02-24T01:30:07.239Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T01:30:07.239Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-24T01:30:07.239Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-24T01:30:07.239Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T01:30:07.239Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T01:30:08.933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-24T01:30:08.933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-24T01:30:08.933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-02-24T01:30:08.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-02-24T01:30:08.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-24T01:30:08.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-02-24T01:30:08.934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-24T01:30:08.935Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-238000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T01:30:08.935Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:30:08.935Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:30:08.936Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T01:30:08.936Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T01:30:08.937Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T01:30:08.937Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-02-24T01:30:19.446Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-24T01:30:19.446Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-238000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-02-24T01:30:19.449Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-02-24T01:30:19.451Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-24T01:30:19.453Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-24T01:30:19.453Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-238000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  01:30:30 up  2:29,  0 users,  load average: 1.13, 1.38, 1.30
	Linux kubernetes-upgrade-238000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [91dc140ff8f3] <==
	* W0224 01:30:03.156741       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 01:30:04.886922       1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 01:30:04.990503       1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0224 01:30:08.135100       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [9813c08201ff] <==
	* I0224 01:30:24.739298       1 autoregister_controller.go:141] Starting autoregister controller
	I0224 01:30:24.739334       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0224 01:30:24.739359       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0224 01:30:24.739364       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0224 01:30:24.741478       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0224 01:30:24.741491       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0224 01:30:24.741741       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0224 01:30:24.741804       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0224 01:30:24.762560       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 01:30:24.797260       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0224 01:30:24.838311       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 01:30:24.838584       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 01:30:24.838653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 01:30:24.838905       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 01:30:24.839003       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 01:30:24.839428       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 01:30:24.839484       1 cache.go:39] Caches are synced for autoregister controller
	I0224 01:30:24.842232       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 01:30:25.545848       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 01:30:25.740660       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 01:30:26.583369       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 01:30:26.594021       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 01:30:26.619399       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 01:30:26.637973       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 01:30:26.644013       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [77b123425204] <==
	* I0224 01:30:26.819368       1 controllermanager.go:622] Started "resourcequota"
	I0224 01:30:26.819393       1 resource_quota_controller.go:277] Starting resource quota controller
	I0224 01:30:26.819460       1 shared_informer.go:273] Waiting for caches to sync for resource quota
	I0224 01:30:26.819484       1 resource_quota_monitor.go:295] QuotaMonitor running
	I0224 01:30:26.832214       1 controllermanager.go:622] Started "job"
	I0224 01:30:26.832495       1 job_controller.go:191] Starting job controller
	I0224 01:30:26.832512       1 shared_informer.go:273] Waiting for caches to sync for job
	I0224 01:30:26.841590       1 controllermanager.go:622] Started "csrapproving"
	I0224 01:30:26.841780       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0224 01:30:26.841828       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving
	E0224 01:30:26.852896       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	W0224 01:30:26.852948       1 controllermanager.go:600] Skipping "service"
	I0224 01:30:26.856007       1 controllermanager.go:622] Started "pvc-protection"
	I0224 01:30:26.856536       1 pvc_protection_controller.go:99] "Starting PVC protection controller"
	I0224 01:30:26.856595       1 shared_informer.go:273] Waiting for caches to sync for PVC protection
	I0224 01:30:26.861200       1 controllermanager.go:622] Started "ttl-after-finished"
	I0224 01:30:26.861407       1 ttlafterfinished_controller.go:104] Starting TTL after finished controller
	I0224 01:30:26.861464       1 shared_informer.go:273] Waiting for caches to sync for TTL after finished
	I0224 01:30:26.887122       1 controllermanager.go:622] Started "namespace"
	I0224 01:30:26.887169       1 namespace_controller.go:195] Starting namespace controller
	I0224 01:30:26.887178       1 shared_informer.go:273] Waiting for caches to sync for namespace
	I0224 01:30:26.889520       1 controllermanager.go:622] Started "cronjob"
	I0224 01:30:26.889663       1 cronjob_controllerv2.go:137] "Starting cronjob controller v2"
	I0224 01:30:26.889722       1 shared_informer.go:273] Waiting for caches to sync for cronjob
	I0224 01:30:26.893700       1 shared_informer.go:280] Caches are synced for tokens
	
	* 
	* ==> kube-controller-manager [e712fda4ced6] <==
	* I0224 01:29:48.405672       1 serving.go:348] Generated self-signed cert in-memory
	I0224 01:29:48.895046       1 controllermanager.go:182] Version: v1.26.1
	I0224 01:29:48.895100       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:29:48.896079       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0224 01:29:48.896094       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0224 01:29:48.896195       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0224 01:29:48.896246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	F0224 01:30:09.142141       1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [517e485f9bf9] <==
	* W0224 01:30:17.038459       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:17.038510       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:17.167820       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:17.167881       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.76.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:17.326045       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:17.326101       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:17.343616       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:17.343666       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:17.589381       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:17.589432       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:17.676551       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:17.676606       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:17.942809       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:17.942866       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:18.274777       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:18.274831       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:18.293470       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:18.293521       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:18.730692       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:18.730740       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0224 01:30:19.152040       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:19.152098       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0224 01:30:19.449911       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 01:30:19.449933       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0224 01:30:19.450116       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [59f4bd533c76] <==
	* I0224 01:30:23.491635       1 serving.go:348] Generated self-signed cert in-memory
	I0224 01:30:24.792316       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0224 01:30:24.792356       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:30:24.795439       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0224 01:30:24.795499       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0224 01:30:24.795503       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 01:30:24.795510       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0224 01:30:24.795513       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 01:30:24.795582       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0224 01:30:24.795625       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0224 01:30:24.796055       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0224 01:30:24.896073       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 01:30:24.896702       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0224 01:30:24.897408       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 01:25:05 UTC, end at Fri 2023-02-24 01:30:31 UTC. --
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: E0224 01:30:21.187847   12515 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224374   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8d51cd2bb6ea14ff026adbc7be2dbb-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-238000\" (UID: \"4f8d51cd2bb6ea14ff026adbc7be2dbb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224432   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8d51cd2bb6ea14ff026adbc7be2dbb-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-238000\" (UID: \"4f8d51cd2bb6ea14ff026adbc7be2dbb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224518   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8fbf9d17871cc5d70d551b13d441304-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-238000\" (UID: \"f8fbf9d17871cc5d70d551b13d441304\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224545   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8fbf9d17871cc5d70d551b13d441304-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-238000\" (UID: \"f8fbf9d17871cc5d70d551b13d441304\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224572   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8d51cd2bb6ea14ff026adbc7be2dbb-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-238000\" (UID: \"4f8d51cd2bb6ea14ff026adbc7be2dbb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224600   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8fbf9d17871cc5d70d551b13d441304-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-238000\" (UID: \"f8fbf9d17871cc5d70d551b13d441304\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224625   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3146f12449a4629710233db86a2f4b0e-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-238000\" (UID: \"3146f12449a4629710233db86a2f4b0e\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224648   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8d51cd2bb6ea14ff026adbc7be2dbb-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-238000\" (UID: \"4f8d51cd2bb6ea14ff026adbc7be2dbb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224673   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8d51cd2bb6ea14ff026adbc7be2dbb-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-238000\" (UID: \"4f8d51cd2bb6ea14ff026adbc7be2dbb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224726   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8fbf9d17871cc5d70d551b13d441304-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-238000\" (UID: \"f8fbf9d17871cc5d70d551b13d441304\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224814   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8fbf9d17871cc5d70d551b13d441304-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-238000\" (UID: \"f8fbf9d17871cc5d70d551b13d441304\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.224867   12515 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8fbf9d17871cc5d70d551b13d441304-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-238000\" (UID: \"f8fbf9d17871cc5d70d551b13d441304\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: E0224 01:30:21.426071   12515 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-238000?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:21.597889   12515 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: E0224 01:30:21.598544   12515 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-238000"
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: W0224 01:30:21.783469   12515 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-238000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 24 01:30:21 kubernetes-upgrade-238000 kubelet[12515]: E0224 01:30:21.783555   12515 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-238000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 24 01:30:22 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:22.009813   12515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255d1f3ca1d31eaf16d3f4bd60064c9542b08f578245f085cb052a90782734cf"
	Feb 24 01:30:22 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:22.415018   12515 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-238000"
	Feb 24 01:30:24 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:24.807111   12515 apiserver.go:52] "Watching apiserver"
	Feb 24 01:30:24 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:24.824213   12515 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 24 01:30:24 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:24.875739   12515 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-238000"
	Feb 24 01:30:24 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:24.875840   12515 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-238000"
	Feb 24 01:30:24 kubernetes-upgrade-238000 kubelet[12515]: I0224 01:30:24.894858   12515 reconciler.go:41] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-238000 -n kubernetes-upgrade-238000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-238000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-238000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-238000 describe pod storage-provisioner: exit status 1 (59.358606ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-238000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-238000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-238000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-238000: (2.989921901s)
--- FAIL: TestKubernetesUpgrade (583.09s)

                                                
                                    
x
+
TestMissingContainerUpgrade (61.43s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.165048681.exe start -p missing-upgrade-713000 --memory=2200 --driver=docker 
E0223 17:20:22.350000   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:22.356386   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:22.366495   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:22.386590   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:22.428741   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:22.509306   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:22.670095   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:22.990635   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:23.631484   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:24.913856   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:27.474070   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:20:32.594866   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.165048681.exe start -p missing-upgrade-713000 --memory=2200 --driver=docker : exit status 78 (45.084691976s)

                                                
                                                
-- stdout --
	* [missing-upgrade-713000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-713000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-713000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 181.52 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 2.01 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 10.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 16.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 29.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 43.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 56.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 70.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 79.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 90.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 105.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 114.78 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 128.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 142.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 157.01 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 171.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 183.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 195.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 209.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 223.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 236.26 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 246.26 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 259.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 273.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 286.81 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 296.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 306.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 321.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 329.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 336.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 350.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 360.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 370.52 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 384.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 394.88 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 409.38 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 423.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 433.20 MiB
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 441.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 452.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 464.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 472.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 482.77 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 496.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 505.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 517.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 527.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 540.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:20:17.951020443 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-713000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:20:37.133357150 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
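Editorial aside on the unit-file conflict quoted above (illustrative only, not part of the captured output): the generated docker.service.new relies on the standard systemd "ExecStart= reset" pattern, an empty ExecStart= clears the command inherited from the stock unit before a replacement command is set. A minimal sketch of that pattern as a drop-in follows; the file path and dockerd flags are placeholders, not values taken from this run.

	# /etc/systemd/system/docker.service.d/10-override.conf   (hypothetical path, for illustration)
	[Service]
	# The empty assignment clears the ExecStart= inherited from the base unit; without it,
	# systemd rejects the merged unit with: "Service has more than one ExecStart= setting,
	# which is only allowed for Type=oneshot services."
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	
	# Apply and inspect the merged result:
	#   sudo systemctl daemon-reload
	#   systemctl cat docker.service
	#   sudo systemctl restart docker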
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.165048681.exe start -p missing-upgrade-713000 --memory=2200 --driver=docker 
E0223 17:20:42.836387   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.165048681.exe start -p missing-upgrade-713000 --memory=2200 --driver=docker : exit status 70 (3.959592287s)

                                                
                                                
-- stdout --
	* [missing-upgrade-713000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-713000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-713000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.165048681.exe start -p missing-upgrade-713000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.165048681.exe start -p missing-upgrade-713000 --memory=2200 --driver=docker : exit status 70 (3.884274913s)

                                                
                                                
-- stdout --
	* [missing-upgrade-713000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-713000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-713000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-02-23 17:20:49.625554 -0800 PST m=+2424.978929978
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-713000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-713000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e413aea6ea04df809e58821d83abf3b12918e155e6ce884f5dca102a31df8f5",
	        "Created": "2023-02-24T01:20:26.126131907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560364,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:20:26.354692859Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/7e413aea6ea04df809e58821d83abf3b12918e155e6ce884f5dca102a31df8f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e413aea6ea04df809e58821d83abf3b12918e155e6ce884f5dca102a31df8f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e413aea6ea04df809e58821d83abf3b12918e155e6ce884f5dca102a31df8f5/hosts",
	        "LogPath": "/var/lib/docker/containers/7e413aea6ea04df809e58821d83abf3b12918e155e6ce884f5dca102a31df8f5/7e413aea6ea04df809e58821d83abf3b12918e155e6ce884f5dca102a31df8f5-json.log",
	        "Name": "/missing-upgrade-713000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-713000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1e51beb7768169b17c7b1a09052d47b1508b16983e0e9a1dd91363f0a42be25e-init/diff:/var/lib/docker/overlay2/68b896e9e4b459964811ab116402ef03a6fece615c7f43c5a371b1fdccd87dc4/diff:/var/lib/docker/overlay2/6702894163c72587d4907ecbd3979d49838df9b235f6a7228816d07b9fee149e/diff:/var/lib/docker/overlay2/ada850987ad57f5b28c128c8163d0f2bec6b7de15e2639e553a0e02662b7a179/diff:/var/lib/docker/overlay2/641c086c35a9e1c6d7c63417e0cf190c35b85f9bd78df78a520ebc5950a561ee/diff:/var/lib/docker/overlay2/4d2461abf0ec25533691697489a139e6fb8b03ccbb7f89f95754ca3f136c45da/diff:/var/lib/docker/overlay2/466c324bfc28381e4fe6b2ceca55fd20ff500f16661d042c47071e367cafcdb8/diff:/var/lib/docker/overlay2/8be1fb96dfea68a547b7d249deee8a7438eb93160ca695efe5dc9eed2566dea3/diff:/var/lib/docker/overlay2/45249acd1b8805ba66680057a4dd9054c77c55f55ade363cf3710e1e57ac48bf/diff:/var/lib/docker/overlay2/01c9031850d5514f35753e13ece571ee1373089da11d8fba72282305e82faab1/diff:/var/lib/docker/overlay2/17da1d
5e19283c033a7ecea4fcd260aea844c9210c0d2a6cae701fdf7e3aab00/diff:/var/lib/docker/overlay2/d1addcd04da3d5d33698ba4af2c65dcbf116ab8d72caeb762d1b64e94e4e0377/diff:/var/lib/docker/overlay2/7c3c030d464d341b90c3c19d87bebb57512eb1b9381aed11c2f10349a7f0a544/diff:/var/lib/docker/overlay2/ddadc1040bac01a8e198b8577151852cb82b023454be50f2e6bdc37d3361fffc/diff:/var/lib/docker/overlay2/9e36b6ff5361ec746a15d0cf9f6a246f474393b156feb86be61c7336bd2e000f/diff:/var/lib/docker/overlay2/e7c79d81776cebe7978369bebf06bca9f7e076cc2140582d405c502bac6ec766/diff:/var/lib/docker/overlay2/ecafb5388e95add1f5e85d827eefdb04a6c9496c54e2305970319e64813ee7e4/diff:/var/lib/docker/overlay2/c95a2c14807a942da9d61633f9c21ece1a0d9a2215c2cca053e6a313fe15ee69/diff:/var/lib/docker/overlay2/462b12ee23b2bacc77ce13b3c4ebca18de99931e301ea20ddcf91f66fd51e98d/diff:/var/lib/docker/overlay2/66e33aca4c3abeb5ab250c4acbea655d919447d419bc9f676ad87de9723cf3d1/diff:/var/lib/docker/overlay2/bf3b3864fa03107e8dbc5202b32d4a19deba149f01ad111a4d653ab49f8f9548/diff:/var/lib/d
ocker/overlay2/66ed2cde4734b96e481801ca8f7e0575283cc7121014ef962d4d47acccae9087/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e51beb7768169b17c7b1a09052d47b1508b16983e0e9a1dd91363f0a42be25e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e51beb7768169b17c7b1a09052d47b1508b16983e0e9a1dd91363f0a42be25e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e51beb7768169b17c7b1a09052d47b1508b16983e0e9a1dd91363f0a42be25e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-713000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-713000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-713000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-713000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-713000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e60bb10172851891f79bae4bbf3e781986b2dacfa5ef5b88ec5017d8192535df",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59484"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59485"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e60bb1017285",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b47271da05ef6b43089c451586e2a25b119ecdf7b0c7a25491854d2f1511f8c3",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "c2dba025af1153f939b032c53cf1836b587b950a1d8caa75a6cb026c36a7f3c9",
	                    "EndpointID": "b47271da05ef6b43089c451586e2a25b119ecdf7b0c7a25491854d2f1511f8c3",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
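The docker inspect dump above is the raw post-mortem data; for triage only a few fields usually matter (container state, restart count, the 8443 host port, the bridge IP). A minimal sketch of pulling just those fields back out of the same JSON, assuming the Docker CLI is on PATH; the struct declares only fields that appear in the dump above and is an annotation, not the helper the test uses:

// inspect_summary.go: hedged sketch for summarising the post-mortem above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields visible in the dump above are declared; the rest is ignored.
type inspect struct {
	Name  string
	State struct {
		Status  string
		Running bool
	}
	RestartCount    int
	NetworkSettings struct {
		IPAddress string
		Ports     map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "missing-upgrade-713000").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Printf("%s: status=%s running=%v restarts=%d ip=%s apiserver=%v\n",
			c.Name, c.State.Status, c.State.Running, c.RestartCount,
			c.NetworkSettings.IPAddress, c.NetworkSettings.Ports["8443/tcp"])
	}
}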
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-713000 -n missing-upgrade-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-713000 -n missing-upgrade-713000: exit status 6 (378.715109ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 17:20:50.050742   35645 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-713000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-713000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
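The status error above ("missing-upgrade-713000" does not appear in the kubeconfig) is the same condition the earlier warning points at with `minikube update-context`. One quick way to confirm it before re-running is to list the cluster entries in the active kubeconfig. A minimal sketch, assuming kubectl is on PATH and KUBECONFIG points at the file named in the log; this is an annotation, not what the status command does internally:

// kubeconfig_check.go: hedged sketch; checks whether a profile name shows up
// among the kubeconfig's cluster entries.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "missing-upgrade-713000"
	// `kubectl config get-clusters` prints a NAME header followed by one
	// cluster name per line from the active kubeconfig.
	out, err := exec.Command("kubectl", "config", "get-clusters").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) == profile {
			fmt.Println("cluster entry present in kubeconfig")
			return
		}
	}
	fmt.Println("cluster entry not found in kubeconfig (matches the status error above)")
}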
helpers_test.go:175: Cleaning up "missing-upgrade-713000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-713000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-713000: (2.305051879s)
--- FAIL: TestMissingContainerUpgrade (61.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (61.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.473468892.exe start -p stopped-upgrade-739000 --memory=2200 --vm-driver=docker 
E0223 17:21:44.277983   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:21:58.800258   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.473468892.exe start -p stopped-upgrade-739000 --memory=2200 --vm-driver=docker : exit status 70 (50.328568878s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-739000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1334151074
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:22:10.756444913 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-739000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:22:30.085544611 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-739000", then "minikube start -p stopped-upgrade-739000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (terminal progress-meter updates omitted; final size shown)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:22:30.085544611 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
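Both failed provisioning attempts print the same unit-file diff, and the comments inside it describe the mechanism: systemd only allows multiple ExecStart= lines for Type=oneshot services, so an override must first include an empty ExecStart= to clear the inherited command before setting a new one. A minimal sketch of that pattern, expressed for illustration as a drop-in override rather than the in-place rewrite of /lib/systemd/system/docker.service shown in the log; the drop-in file name and the dockerd flags are illustrative only, and the program needs root:

// execstart_override.go: hedged sketch of the ExecStart-clearing pattern
// described in the unit-file comments above, written as a systemd drop-in.
package main

import (
	"log"
	"os"
	"os/exec"
)

const dropIn = `[Service]
# Clear the ExecStart inherited from the base docker.service; without this,
# systemd rejects the unit: "Service has more than one ExecStart= setting,
# which is only allowed for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
`

func main() {
	dir := "/etc/systemd/system/docker.service.d" // standard drop-in directory
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	// File name is hypothetical, chosen for this example only.
	if err := os.WriteFile(dir+"/10-execstart-example.conf", []byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
	// Pick up the drop-in and restart the service, mirroring the provisioner's
	// daemon-reload + restart sequence seen in the log.
	for _, args := range [][]string{{"daemon-reload"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}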
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.473468892.exe start -p stopped-upgrade-739000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.473468892.exe start -p stopped-upgrade-739000 --memory=2200 --vm-driver=docker : exit status 70 (4.31568509s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-739000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3199279686
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-739000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.473468892.exe start -p stopped-upgrade-739000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.473468892.exe start -p stopped-upgrade-739000 --memory=2200 --vm-driver=docker : exit status 70 (4.258395977s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-739000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig296861294
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-739000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (61.07s)
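The legacy release binary is invoked three times above with identical arguments before the test gives up, so the visible behaviour is a retry loop around an external command. A minimal sketch of that pattern, assuming nothing about the real retry helper the test uses; the binary path and arguments mirror the failing invocation in the log:

// retry_start.go: hedged sketch of retrying a flaky external command,
// matching the three consecutive (dbg) Run attempts above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retry(attempts int, delay time.Duration, name string, args ...string) error {
	var err error
	for i := 1; i <= attempts; i++ {
		out, runErr := exec.Command(name, args...).CombinedOutput()
		if runErr == nil {
			return nil
		}
		err = fmt.Errorf("attempt %d/%d: %v\n%s", i, attempts, runErr, out)
		fmt.Println(err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(3, 5*time.Second,
		"/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.473468892.exe",
		"start", "-p", "stopped-upgrade-739000", "--memory=2200", "--vm-driver=docker")
	if err != nil {
		fmt.Println("giving up:", err)
	}
}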

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (252.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-977000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-977000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m11.70750917s)

                                                
                                                
-- stdout --
	* [old-k8s-version-977000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-977000 in cluster old-k8s-version-977000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 17:35:05.111638   42515 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:35:05.111864   42515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:35:05.111869   42515 out.go:309] Setting ErrFile to fd 2...
	I0223 17:35:05.111873   42515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:35:05.111988   42515 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:35:05.113410   42515 out.go:303] Setting JSON to false
	I0223 17:35:05.132765   42515 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9280,"bootTime":1677193225,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:35:05.132914   42515 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:35:05.154991   42515 out.go:177] * [old-k8s-version-977000] minikube v1.29.0 on Darwin 13.2
	I0223 17:35:05.197058   42515 notify.go:220] Checking for updates...
	I0223 17:35:05.218165   42515 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:35:05.260953   42515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:35:05.303010   42515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:35:05.344920   42515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:35:05.387074   42515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:35:05.429164   42515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:35:05.450953   42515 config.go:182] Loaded profile config "bridge-152000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:35:05.451066   42515 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:35:05.514639   42515 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:35:05.514765   42515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:35:05.664419   42515 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:35:05.567273022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:35:05.686187   42515 out.go:177] * Using the docker driver based on user configuration
	I0223 17:35:05.707076   42515 start.go:296] selected driver: docker
	I0223 17:35:05.707100   42515 start.go:857] validating driver "docker" against <nil>
	I0223 17:35:05.707117   42515 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:35:05.711110   42515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:35:05.857921   42515 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:35:05.762838571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:35:05.858043   42515 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 17:35:05.858209   42515 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 17:35:05.879761   42515 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 17:35:05.900638   42515 cni.go:84] Creating CNI manager for ""
	I0223 17:35:05.900771   42515 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 17:35:05.900791   42515 start_flags.go:319] config:
	{Name:old-k8s-version-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:35:05.943675   42515 out.go:177] * Starting control plane node old-k8s-version-977000 in cluster old-k8s-version-977000
	I0223 17:35:05.964764   42515 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:35:05.985749   42515 out.go:177] * Pulling base image ...
	I0223 17:35:06.027712   42515 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:35:06.027718   42515 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:35:06.027815   42515 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 17:35:06.027839   42515 cache.go:57] Caching tarball of preloaded images
	I0223 17:35:06.028124   42515 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:35:06.028143   42515 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 17:35:06.029277   42515 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/config.json ...
	I0223 17:35:06.029431   42515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/config.json: {Name:mke69de7ce09d338a385bae20c7027b755cb8332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:06.085560   42515 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:35:06.085580   42515 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:35:06.085600   42515 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:35:06.085639   42515 start.go:364] acquiring machines lock for old-k8s-version-977000: {Name:mk29826c7430a5f84af8ee3c20735d7dd9caf7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:35:06.085797   42515 start.go:368] acquired machines lock for "old-k8s-version-977000" in 146.084µs
	I0223 17:35:06.085830   42515 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 17:35:06.085910   42515 start.go:125] createHost starting for "" (driver="docker")
	I0223 17:35:06.144580   42515 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 17:35:06.144923   42515 start.go:159] libmachine.API.Create for "old-k8s-version-977000" (driver="docker")
	I0223 17:35:06.144998   42515 client.go:168] LocalClient.Create starting
	I0223 17:35:06.145172   42515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 17:35:06.145252   42515 main.go:141] libmachine: Decoding PEM data...
	I0223 17:35:06.145284   42515 main.go:141] libmachine: Parsing certificate...
	I0223 17:35:06.145410   42515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 17:35:06.145477   42515 main.go:141] libmachine: Decoding PEM data...
	I0223 17:35:06.145496   42515 main.go:141] libmachine: Parsing certificate...
	I0223 17:35:06.146222   42515 cli_runner.go:164] Run: docker network inspect old-k8s-version-977000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 17:35:06.204335   42515 cli_runner.go:211] docker network inspect old-k8s-version-977000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 17:35:06.204442   42515 network_create.go:281] running [docker network inspect old-k8s-version-977000] to gather additional debugging logs...
	I0223 17:35:06.204460   42515 cli_runner.go:164] Run: docker network inspect old-k8s-version-977000
	W0223 17:35:06.264019   42515 cli_runner.go:211] docker network inspect old-k8s-version-977000 returned with exit code 1
	I0223 17:35:06.264045   42515 network_create.go:284] error running [docker network inspect old-k8s-version-977000]: docker network inspect old-k8s-version-977000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-977000
	I0223 17:35:06.264056   42515 network_create.go:286] output of [docker network inspect old-k8s-version-977000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-977000
	
	** /stderr **
	I0223 17:35:06.264159   42515 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 17:35:06.325263   42515 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:35:06.325633   42515 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006df080}
	I0223 17:35:06.325647   42515 network_create.go:123] attempt to create docker network old-k8s-version-977000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 17:35:06.325716   42515 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-977000 old-k8s-version-977000
	W0223 17:35:06.384171   42515 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-977000 old-k8s-version-977000 returned with exit code 1
	W0223 17:35:06.384202   42515 network_create.go:148] failed to create docker network old-k8s-version-977000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-977000 old-k8s-version-977000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 17:35:06.384239   42515 network_create.go:115] failed to create docker network old-k8s-version-977000 192.168.58.0/24, will retry: subnet is taken
	I0223 17:35:06.385582   42515 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:35:06.385965   42515 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010e6570}
	I0223 17:35:06.385984   42515 network_create.go:123] attempt to create docker network old-k8s-version-977000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 17:35:06.386054   42515 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-977000 old-k8s-version-977000
	W0223 17:35:06.443841   42515 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-977000 old-k8s-version-977000 returned with exit code 1
	W0223 17:35:06.443875   42515 network_create.go:148] failed to create docker network old-k8s-version-977000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-977000 old-k8s-version-977000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 17:35:06.443890   42515 network_create.go:115] failed to create docker network old-k8s-version-977000 192.168.67.0/24, will retry: subnet is taken
	I0223 17:35:06.445253   42515 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 17:35:06.445580   42515 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00059ec80}
	I0223 17:35:06.445593   42515 network_create.go:123] attempt to create docker network old-k8s-version-977000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 17:35:06.445666   42515 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-977000 old-k8s-version-977000
	I0223 17:35:06.563613   42515 network_create.go:107] docker network old-k8s-version-977000 192.168.76.0/24 created
	I0223 17:35:06.563656   42515 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-977000" container
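
The failed creates above are minikube's subnet probe: it skips private /24 ranges already reserved by existing Docker networks, retries whenever `docker network create` fails with "Pool overlaps with other one on this address space", and settles here on 192.168.76.0/24. A rough Go sketch of that retry loop (not the actual network_create.go code; the candidate list, options and error handling are simplified for illustration):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// tryCreateNetwork walks candidate /24 subnets and returns the first one
	// Docker accepts; "Pool overlaps" from the daemon means the range is taken.
	func tryCreateNetwork(name string) (string, error) {
		candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
		for _, subnet := range candidates {
			gateway := strings.TrimSuffix(subnet, "0/24") + "1" // e.g. 192.168.76.1
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
				"-o", "--ip-masq", "-o", "--icc", name).CombinedOutput()
			if err == nil {
				return subnet, nil
			}
			if strings.Contains(string(out), "Pool overlaps") {
				continue // subnet taken, try the next private range
			}
			return "", fmt.Errorf("docker network create failed: %v: %s", err, out)
		}
		return "", fmt.Errorf("no free subnet found for %s", name)
	}

	func main() {
		subnet, err := tryCreateNetwork("old-k8s-version-977000")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("created network on", subnet)
	}
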
	I0223 17:35:06.563803   42515 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 17:35:06.635036   42515 cli_runner.go:164] Run: docker volume create old-k8s-version-977000 --label name.minikube.sigs.k8s.io=old-k8s-version-977000 --label created_by.minikube.sigs.k8s.io=true
	I0223 17:35:06.695768   42515 oci.go:103] Successfully created a docker volume old-k8s-version-977000
	I0223 17:35:06.695883   42515 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-977000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-977000 --entrypoint /usr/bin/test -v old-k8s-version-977000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 17:35:07.231394   42515 oci.go:107] Successfully prepared a docker volume old-k8s-version-977000
	I0223 17:35:07.231430   42515 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:35:07.231444   42515 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 17:35:07.231562   42515 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-977000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 17:35:14.156474   42515 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-977000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.92480532s)
	I0223 17:35:14.156495   42515 kic.go:199] duration metric: took 6.925019 seconds to extract preloaded images to volume
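
The ~6.9s step above avoids pulling images: the cached lz4 preload tarball is extracted straight into the new volume by running tar inside a throwaway kicbase container that mounts both the tarball and the volume. A minimal Go sketch of that invocation (the paths in main are illustrative placeholders; the image tag is the one used by this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload untars a preloaded-images tarball into a docker volume by
	// running tar inside a short-lived container that mounts both of them.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", // host tarball, read-only
			"-v", volume+":/extractDir", // named volume that later becomes /var
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("preload extraction failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Illustrative arguments; the run above uses the v1.16.0 docker-overlay2
		// preload from the minikube cache and the pinned kicbase image.
		err := extractPreload(
			"/path/to/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
			"old-k8s-version-977000",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768",
		)
		fmt.Println(err)
	}
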
	I0223 17:35:14.156623   42515 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 17:35:14.314443   42515 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-977000 --name old-k8s-version-977000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-977000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-977000 --network old-k8s-version-977000 --ip 192.168.76.2 --volume old-k8s-version-977000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 17:35:14.720287   42515 cli_runner.go:164] Run: docker container inspect old-k8s-version-977000 --format={{.State.Running}}
	I0223 17:35:14.789136   42515 cli_runner.go:164] Run: docker container inspect old-k8s-version-977000 --format={{.State.Status}}
	I0223 17:35:14.860525   42515 cli_runner.go:164] Run: docker exec old-k8s-version-977000 stat /var/lib/dpkg/alternatives/iptables
	I0223 17:35:14.980411   42515 oci.go:144] the created container "old-k8s-version-977000" has a running status.
	I0223 17:35:14.980451   42515 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa...
	I0223 17:35:15.135838   42515 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 17:35:15.241261   42515 cli_runner.go:164] Run: docker container inspect old-k8s-version-977000 --format={{.State.Status}}
	I0223 17:35:15.302006   42515 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 17:35:15.302026   42515 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-977000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 17:35:15.408013   42515 cli_runner.go:164] Run: docker container inspect old-k8s-version-977000 --format={{.State.Status}}
	I0223 17:35:15.468400   42515 machine.go:88] provisioning docker machine ...
	I0223 17:35:15.468443   42515 ubuntu.go:169] provisioning hostname "old-k8s-version-977000"
	I0223 17:35:15.468561   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:15.534678   42515 main.go:141] libmachine: Using SSH client type: native
	I0223 17:35:15.535105   42515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61624 <nil> <nil>}
	I0223 17:35:15.535119   42515 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-977000 && echo "old-k8s-version-977000" | sudo tee /etc/hostname
	I0223 17:35:15.677217   42515 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-977000
	
	I0223 17:35:15.677331   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:15.742846   42515 main.go:141] libmachine: Using SSH client type: native
	I0223 17:35:15.743230   42515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61624 <nil> <nil>}
	I0223 17:35:15.743242   42515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-977000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-977000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-977000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:35:15.877806   42515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:35:15.877835   42515 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:35:15.877851   42515 ubuntu.go:177] setting up certificates
	I0223 17:35:15.877861   42515 provision.go:83] configureAuth start
	I0223 17:35:15.877958   42515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-977000
	I0223 17:35:15.939838   42515 provision.go:138] copyHostCerts
	I0223 17:35:15.939944   42515 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:35:15.939954   42515 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:35:15.940053   42515 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:35:15.940247   42515 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:35:15.940253   42515 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:35:15.940332   42515 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:35:15.940507   42515 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:35:15.940513   42515 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:35:15.940570   42515 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:35:15.940698   42515 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-977000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-977000]
	I0223 17:35:16.089397   42515 provision.go:172] copyRemoteCerts
	I0223 17:35:16.089496   42515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:35:16.089551   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:16.146611   42515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61624 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:35:16.243942   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:35:16.264325   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:35:16.285160   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 17:35:16.304828   42515 provision.go:86] duration metric: configureAuth took 426.95167ms
	I0223 17:35:16.304843   42515 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:35:16.304996   42515 config.go:182] Loaded profile config "old-k8s-version-977000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 17:35:16.305065   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:16.363721   42515 main.go:141] libmachine: Using SSH client type: native
	I0223 17:35:16.364082   42515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61624 <nil> <nil>}
	I0223 17:35:16.364098   42515 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:35:16.497590   42515 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:35:16.497619   42515 ubuntu.go:71] root file system type: overlay
	I0223 17:35:16.497828   42515 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:35:16.497905   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:16.563338   42515 main.go:141] libmachine: Using SSH client type: native
	I0223 17:35:16.563697   42515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61624 <nil> <nil>}
	I0223 17:35:16.563750   42515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:35:16.705680   42515 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:35:16.705770   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:16.762270   42515 main.go:141] libmachine: Using SSH client type: native
	I0223 17:35:16.762621   42515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61624 <nil> <nil>}
	I0223 17:35:16.762641   42515 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:35:17.381606   42515 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 01:35:16.704089528 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 17:35:17.381624   42515 machine.go:91] provisioned docker machine in 1.913195128s
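
The provisioning step above rewrites /lib/systemd/system/docker.service via a compare-then-swap: the new unit is written to docker.service.new, diffed against the installed file, and only moved into place (followed by daemon-reload, enable and restart) when the two differ, so an unchanged unit never restarts Docker. A small Go sketch of that idempotent update, run locally for illustration (it assumes root and a systemd host):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged swaps newPath into place and restarts the unit only when
	// its contents differ from the currently installed file, mirroring the
	// diff || { mv && restart } step in the log above.
	func installIfChanged(current, newPath, unit string) error {
		oldData, _ := os.ReadFile(current) // a missing file just counts as different
		newData, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		if bytes.Equal(oldData, newData) {
			return nil // nothing changed, leave the running service untouched
		}
		if err := os.Rename(newPath, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", unit},
			{"systemctl", "restart", unit},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(installIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
			"docker"))
	}
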
	I0223 17:35:17.381629   42515 client.go:171] LocalClient.Create took 11.236572025s
	I0223 17:35:17.381645   42515 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-977000" took 11.236675601s
	I0223 17:35:17.381654   42515 start.go:300] post-start starting for "old-k8s-version-977000" (driver="docker")
	I0223 17:35:17.381658   42515 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:35:17.381725   42515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:35:17.381784   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:17.440265   42515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61624 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:35:17.533313   42515 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:35:17.537119   42515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:35:17.537138   42515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:35:17.537146   42515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:35:17.537150   42515 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:35:17.537162   42515 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:35:17.537253   42515 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:35:17.537413   42515 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:35:17.537576   42515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:35:17.545280   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:35:17.562825   42515 start.go:303] post-start completed in 181.154814ms
	I0223 17:35:17.563360   42515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-977000
	I0223 17:35:17.620215   42515 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/config.json ...
	I0223 17:35:17.620629   42515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:35:17.620694   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:17.676411   42515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61624 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:35:17.766403   42515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:35:17.771026   42515 start.go:128] duration metric: createHost completed in 11.685056264s
	I0223 17:35:17.771044   42515 start.go:83] releasing machines lock for "old-k8s-version-977000", held for 11.685187325s
	I0223 17:35:17.771140   42515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-977000
	I0223 17:35:17.827103   42515 ssh_runner.go:195] Run: cat /version.json
	I0223 17:35:17.827119   42515 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 17:35:17.827177   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:17.827207   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:17.886823   42515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61624 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:35:17.886991   42515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61624 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:35:18.227026   42515 ssh_runner.go:195] Run: systemctl --version
	I0223 17:35:18.231940   42515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:35:18.237000   42515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:35:18.257029   42515 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:35:18.257107   42515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 17:35:18.271235   42515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 17:35:18.279142   42515 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 17:35:18.279155   42515 start.go:485] detecting cgroup driver to use...
	I0223 17:35:18.279168   42515 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:35:18.279253   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:35:18.292733   42515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 17:35:18.301396   42515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:35:18.310185   42515 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:35:18.310245   42515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:35:18.318883   42515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:35:18.327551   42515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:35:18.335993   42515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:35:18.344514   42515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:35:18.352647   42515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:35:18.361362   42515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:35:18.368822   42515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:35:18.376103   42515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:35:18.442475   42515 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:35:18.518693   42515 start.go:485] detecting cgroup driver to use...
	I0223 17:35:18.518711   42515 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:35:18.518772   42515 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:35:18.529501   42515 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:35:18.529570   42515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:35:18.539602   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:35:18.553963   42515 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:35:18.626054   42515 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:35:18.715120   42515 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:35:18.715137   42515 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:35:18.728566   42515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:35:18.823182   42515 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:35:19.044247   42515 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:35:19.071062   42515 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:35:19.121063   42515 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0223 17:35:19.121218   42515 cli_runner.go:164] Run: docker exec -t old-k8s-version-977000 dig +short host.docker.internal
	I0223 17:35:19.240245   42515 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:35:19.240359   42515 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:35:19.245011   42515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
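
The grep/echo/cp one-liner above pins a hosts entry idempotently inside the node: any existing line ending in the name is dropped, a single fresh "IP<tab>name" line is appended, and the result is copied back over /etc/hosts (the same pattern appears again below for control-plane.minikube.internal). A hedged Go equivalent (the path and values in main are illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHostsEntry rewrites a hosts file so that exactly one line maps name to ip.
	func pinHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale entry for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Illustrative: the run above pins host.minikube.internal to the host IP
		// discovered by digging host.docker.internal from inside the container.
		fmt.Println(pinHostsEntry("/tmp/hosts-example", "192.168.65.2", "host.minikube.internal"))
	}
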
	I0223 17:35:19.254961   42515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:35:19.312090   42515 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:35:19.312168   42515 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:35:19.331932   42515 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:35:19.331949   42515 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:35:19.332048   42515 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:35:19.352808   42515 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:35:19.352822   42515 cache_images.go:84] Images are preloaded, skipping loading
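
Whether image loading can be skipped is decided by listing what the daemon already has and checking it against the expected set for this Kubernetes version, as the two `docker images --format {{.Repository}}:{{.Tag}}` runs above show. A simplified Go sketch of that check (the expected list is abbreviated):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded reports whether every expected image already shows up in
	// the daemon's image list, in which case cached-image loading is skipped.
	func imagesPreloaded(expected []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range expected {
			if !have[img] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := imagesPreloaded([]string{
			"k8s.gcr.io/kube-apiserver:v1.16.0",
			"k8s.gcr.io/etcd:3.3.15-0",
			"k8s.gcr.io/pause:3.1",
		})
		fmt.Println(ok, err)
	}
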
	I0223 17:35:19.352916   42515 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:35:19.379389   42515 cni.go:84] Creating CNI manager for ""
	I0223 17:35:19.379407   42515 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 17:35:19.379423   42515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:35:19.379438   42515 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-977000 NodeName:old-k8s-version-977000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:35:19.379547   42515 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-977000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-977000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 17:35:19.379621   42515 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-977000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 17:35:19.379681   42515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 17:35:19.387571   42515 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:35:19.387631   42515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:35:19.395162   42515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 17:35:19.408213   42515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:35:19.421335   42515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 17:35:19.434723   42515 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:35:19.438828   42515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:35:19.448744   42515 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000 for IP: 192.168.76.2
	I0223 17:35:19.448760   42515 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:19.448945   42515 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:35:19.449007   42515 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:35:19.449051   42515 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/client.key
	I0223 17:35:19.449065   42515 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/client.crt with IP's: []
	I0223 17:35:19.504927   42515 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/client.crt ...
	I0223 17:35:19.504938   42515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/client.crt: {Name:mk25f5b99fd6f5d98a7ff9604e345f5d7a74ed89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:19.510628   42515 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/client.key ...
	I0223 17:35:19.510642   42515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/client.key: {Name:mk825e8e66f5ba3485b95521c8861d12fdee854f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:19.532707   42515 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key.31bdca25
	I0223 17:35:19.532730   42515 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 17:35:19.682158   42515 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.crt.31bdca25 ...
	I0223 17:35:19.682169   42515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.crt.31bdca25: {Name:mkb82aff22048118a7e5a3f763c71d474a315069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:19.682437   42515 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key.31bdca25 ...
	I0223 17:35:19.682455   42515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key.31bdca25: {Name:mk643651ff44dda1ccf3db3c6954109c0858c6fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:19.682649   42515 certs.go:333] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.crt
	I0223 17:35:19.682818   42515 certs.go:337] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key
	I0223 17:35:19.682972   42515 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.key
	I0223 17:35:19.682987   42515 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.crt with IP's: []
	I0223 17:35:19.995443   42515 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.crt ...
	I0223 17:35:19.995458   42515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.crt: {Name:mk93432dd4103fea3a1493f297bc718430a4ed85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:19.995735   42515 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.key ...
	I0223 17:35:19.995745   42515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.key: {Name:mk38992f7a9000e57e23c188f76f18a7a0879237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:35:19.996133   42515 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:35:19.996183   42515 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:35:19.996194   42515 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:35:19.996230   42515 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:35:19.996275   42515 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:35:19.996307   42515 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:35:19.996376   42515 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:35:19.996863   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:35:20.015783   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 17:35:20.033928   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:35:20.051633   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 17:35:20.069251   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:35:20.086891   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:35:20.104570   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:35:20.122129   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:35:20.151706   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:35:20.171086   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:35:20.188585   42515 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:35:20.205896   42515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:35:20.219405   42515 ssh_runner.go:195] Run: openssl version
	I0223 17:35:20.225063   42515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:35:20.233328   42515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:35:20.237359   42515 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:35:20.237407   42515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:35:20.243052   42515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 17:35:20.251393   42515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:35:20.259667   42515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:35:20.263763   42515 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:35:20.263815   42515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:35:20.269319   42515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:35:20.277484   42515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:35:20.285572   42515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:35:20.289596   42515 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:35:20.289648   42515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:35:20.294965   42515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:35:20.303081   42515 kubeadm.go:401] StartCluster: {Name:old-k8s-version-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:35:20.303194   42515 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:35:20.322153   42515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:35:20.330046   42515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:35:20.337743   42515 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:35:20.337798   42515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:35:20.345082   42515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:35:20.345108   42515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:35:20.393850   42515 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 17:35:20.393898   42515 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:35:20.562940   42515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:35:20.563013   42515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:35:20.563090   42515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:35:20.714621   42515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:35:20.715399   42515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:35:20.721681   42515 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 17:35:20.783446   42515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:35:20.807017   42515 out.go:204]   - Generating certificates and keys ...
	I0223 17:35:20.807133   42515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:35:20.807204   42515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:35:20.914101   42515 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 17:35:21.035171   42515 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 17:35:21.159469   42515 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 17:35:21.438776   42515 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 17:35:21.707307   42515 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 17:35:21.707465   42515 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-977000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0223 17:35:21.909944   42515 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 17:35:21.910068   42515 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-977000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0223 17:35:21.949799   42515 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 17:35:22.059309   42515 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 17:35:22.210733   42515 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 17:35:22.210902   42515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:35:22.299785   42515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:35:22.450892   42515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:35:22.666850   42515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:35:22.802309   42515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:35:22.802849   42515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:35:22.824750   42515 out.go:204]   - Booting up control plane ...
	I0223 17:35:22.824865   42515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:35:22.824943   42515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:35:22.825001   42515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:35:22.825072   42515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:35:22.825216   42515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:36:02.812154   42515 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 17:36:02.813555   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:36:02.813787   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:36:07.815346   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:36:07.815582   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:36:17.815744   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:36:17.816284   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:36:37.818010   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:36:37.818270   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:37:17.848084   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:37:17.848335   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:37:17.848349   42515 kubeadm.go:322] 
	I0223 17:37:17.848387   42515 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 17:37:17.848448   42515 kubeadm.go:322] 	timed out waiting for the condition
	I0223 17:37:17.848467   42515 kubeadm.go:322] 
	I0223 17:37:17.848523   42515 kubeadm.go:322] This error is likely caused by:
	I0223 17:37:17.848573   42515 kubeadm.go:322] 	- The kubelet is not running
	I0223 17:37:17.848715   42515 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 17:37:17.848729   42515 kubeadm.go:322] 
	I0223 17:37:17.848861   42515 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 17:37:17.848901   42515 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 17:37:17.848939   42515 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 17:37:17.848947   42515 kubeadm.go:322] 
	I0223 17:37:17.849071   42515 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 17:37:17.849195   42515 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 17:37:17.849293   42515 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 17:37:17.849364   42515 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 17:37:17.849467   42515 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 17:37:17.849506   42515 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 17:37:17.851574   42515 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 17:37:17.851648   42515 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 17:37:17.851767   42515 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 17:37:17.851904   42515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:37:17.851976   42515 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 17:37:17.852043   42515 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 17:37:17.852192   42515 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-977000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-977000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 17:37:17.852223   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 17:37:18.264398   42515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:37:18.274426   42515 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:37:18.274490   42515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:37:18.282141   42515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:37:18.282169   42515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:37:18.329445   42515 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 17:37:18.329489   42515 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:37:18.494754   42515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:37:18.494833   42515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:37:18.494904   42515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:37:18.648652   42515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:37:18.649533   42515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:37:18.656191   42515 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 17:37:18.733848   42515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:37:18.755628   42515 out.go:204]   - Generating certificates and keys ...
	I0223 17:37:18.755759   42515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:37:18.755837   42515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:37:18.755917   42515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 17:37:18.755973   42515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 17:37:18.756052   42515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 17:37:18.756106   42515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 17:37:18.756180   42515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 17:37:18.756250   42515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 17:37:18.756323   42515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 17:37:18.756406   42515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 17:37:18.756464   42515 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 17:37:18.756558   42515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:37:18.796742   42515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:37:18.942455   42515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:37:19.161236   42515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:37:19.257584   42515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:37:19.258046   42515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:37:19.282421   42515 out.go:204]   - Booting up control plane ...
	I0223 17:37:19.282575   42515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:37:19.282752   42515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:37:19.282927   42515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:37:19.283079   42515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:37:19.283370   42515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:37:59.282295   42515 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 17:37:59.283180   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:37:59.283400   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:38:04.285287   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:38:04.285493   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:38:14.287696   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:38:14.287947   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:38:34.289998   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:38:34.290236   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:39:14.292823   42515 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:39:14.293040   42515 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:39:14.293051   42515 kubeadm.go:322] 
	I0223 17:39:14.293091   42515 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 17:39:14.293148   42515 kubeadm.go:322] 	timed out waiting for the condition
	I0223 17:39:14.293163   42515 kubeadm.go:322] 
	I0223 17:39:14.293201   42515 kubeadm.go:322] This error is likely caused by:
	I0223 17:39:14.293249   42515 kubeadm.go:322] 	- The kubelet is not running
	I0223 17:39:14.293364   42515 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 17:39:14.293377   42515 kubeadm.go:322] 
	I0223 17:39:14.293494   42515 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 17:39:14.293534   42515 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 17:39:14.293568   42515 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 17:39:14.293573   42515 kubeadm.go:322] 
	I0223 17:39:14.293705   42515 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 17:39:14.293822   42515 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 17:39:14.293923   42515 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 17:39:14.293989   42515 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 17:39:14.294078   42515 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 17:39:14.294139   42515 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 17:39:14.297062   42515 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 17:39:14.297143   42515 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 17:39:14.297248   42515 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 17:39:14.297326   42515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:39:14.297391   42515 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 17:39:14.297451   42515 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 17:39:14.297478   42515 kubeadm.go:403] StartCluster complete in 3m53.946810607s
	I0223 17:39:14.297570   42515 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:39:14.318481   42515 logs.go:277] 0 containers: []
	W0223 17:39:14.318496   42515 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:39:14.318566   42515 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:39:14.338951   42515 logs.go:277] 0 containers: []
	W0223 17:39:14.338965   42515 logs.go:279] No container was found matching "etcd"
	I0223 17:39:14.339038   42515 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:39:14.359356   42515 logs.go:277] 0 containers: []
	W0223 17:39:14.359369   42515 logs.go:279] No container was found matching "coredns"
	I0223 17:39:14.359444   42515 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:39:14.378941   42515 logs.go:277] 0 containers: []
	W0223 17:39:14.378953   42515 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:39:14.379023   42515 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:39:14.398601   42515 logs.go:277] 0 containers: []
	W0223 17:39:14.398612   42515 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:39:14.398682   42515 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:39:14.418570   42515 logs.go:277] 0 containers: []
	W0223 17:39:14.418584   42515 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:39:14.418663   42515 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:39:14.438334   42515 logs.go:277] 0 containers: []
	W0223 17:39:14.438348   42515 logs.go:279] No container was found matching "kindnet"
	I0223 17:39:14.438355   42515 logs.go:123] Gathering logs for dmesg ...
	I0223 17:39:14.438364   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:39:14.450658   42515 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:39:14.450677   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:39:14.506165   42515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:39:14.506176   42515 logs.go:123] Gathering logs for Docker ...
	I0223 17:39:14.506183   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:39:14.530890   42515 logs.go:123] Gathering logs for container status ...
	I0223 17:39:14.530904   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:39:16.576969   42515 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046005625s)
	I0223 17:39:16.577075   42515 logs.go:123] Gathering logs for kubelet ...
	I0223 17:39:16.577083   42515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0223 17:39:16.614326   42515 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 17:39:16.614346   42515 out.go:239] * 
	W0223 17:39:16.614449   42515 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:39:16.614462   42515 out.go:239] * 
	W0223 17:39:16.615053   42515 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 17:39:16.657672   42515 out.go:177] 
	W0223 17:39:16.716027   42515 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:39:16.716148   42515 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 17:39:16.716228   42515 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 17:39:16.778588   42515 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-977000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-977000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-977000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c",
	        "Created": "2023-02-24T01:35:14.380880766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:35:14.709453937Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hostname",
	        "HostsPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hosts",
	        "LogPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c-json.log",
	        "Name": "/old-k8s-version-977000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-977000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-977000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-977000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-977000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-977000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3505e397ab80e9420d900918aec6e67a14534455d9486eac92fb062a100dc5c7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61624"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61625"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61626"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61627"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61628"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3505e397ab80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-977000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "821104c1f595",
	                        "old-k8s-version-977000"
	                    ],
	                    "NetworkID": "b62760f23c198a189d083cf13177930b5af19fc8f1e171a0fb08c0832e6d6e8a",
	                    "EndpointID": "5d0c9d0869863374e75fabba63aa5547da0714b9a5633a5f32972176a42d8c14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 6 (398.155338ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 17:39:17.304935   43367 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-977000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-977000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (252.20s)
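The failure above follows the pattern minikube itself flags: kubeadm's wait-control-plane phase times out because the kubelet never answers on localhost:10248, while the preflight warnings report a cgroupfs/systemd cgroup-driver mismatch and an unvalidated Docker 23.0.1. A minimal, illustrative retry of this profile, assuming the cgroup-driver suggestion printed in the log is the relevant fix; the flags are the ones already shown in the output above, and the docker exec form for reaching the kic node container is an assumption, not part of the original run:

	# recreate the profile with the kubelet cgroup driver minikube suggests
	out/minikube-darwin-amd64 delete -p old-k8s-version-977000
	out/minikube-darwin-amd64 start -p old-k8s-version-977000 --driver=docker \
	  --kubernetes-version=v1.16.0 --memory=2200 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still refuses to start, run the diagnostics kubeadm suggests inside the node container
	docker exec old-k8s-version-977000 systemctl status kubelet
	docker exec old-k8s-version-977000 journalctl -xeu kubelet
	docker exec old-k8s-version-977000 sh -c "docker ps -a | grep kube | grep -v pause"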

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-977000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-977000 create -f testdata/busybox.yaml: exit status 1 (35.254822ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-977000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-977000 create -f testdata/busybox.yaml failed: exit status 1
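This create step cannot succeed once FirstStart has failed: kubeadm never initialized the cluster, so no context named old-k8s-version-977000 was written to the kubeconfig, which is exactly what the status error above reports. An illustrative way to confirm that, using the kubeconfig path from the status error; update-context only helps after the profile has actually started:

	# list the contexts the test harness kubeconfig really contains
	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	# once the profile starts successfully, a stale endpoint can be repaired with
	out/minikube-darwin-amd64 update-context -p old-k8s-version-977000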
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-977000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-977000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c",
	        "Created": "2023-02-24T01:35:14.380880766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:35:14.709453937Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hostname",
	        "HostsPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hosts",
	        "LogPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c-json.log",
	        "Name": "/old-k8s-version-977000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-977000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-977000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-977000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-977000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-977000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3505e397ab80e9420d900918aec6e67a14534455d9486eac92fb062a100dc5c7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61624"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61625"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61626"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61627"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61628"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3505e397ab80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-977000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "821104c1f595",
	                        "old-k8s-version-977000"
	                    ],
	                    "NetworkID": "b62760f23c198a189d083cf13177930b5af19fc8f1e171a0fb08c0832e6d6e8a",
	                    "EndpointID": "5d0c9d0869863374e75fabba63aa5547da0714b9a5633a5f32972176a42d8c14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 6 (391.614807ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 17:39:17.790521   43380 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-977000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-977000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-977000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-977000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c",
	        "Created": "2023-02-24T01:35:14.380880766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:35:14.709453937Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hostname",
	        "HostsPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hosts",
	        "LogPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c-json.log",
	        "Name": "/old-k8s-version-977000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-977000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-977000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-977000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-977000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-977000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3505e397ab80e9420d900918aec6e67a14534455d9486eac92fb062a100dc5c7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61624"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61625"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61626"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61627"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61628"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3505e397ab80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-977000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "821104c1f595",
	                        "old-k8s-version-977000"
	                    ],
	                    "NetworkID": "b62760f23c198a189d083cf13177930b5af19fc8f1e171a0fb08c0832e6d6e8a",
	                    "EndpointID": "5d0c9d0869863374e75fabba63aa5547da0714b9a5633a5f32972176a42d8c14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 6 (393.495484ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 17:39:18.243221   43392 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-977000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-977000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-977000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0223 17:39:20.884975   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:39:21.056101   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:39:29.938657   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:29.944529   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:29.954632   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:29.974905   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:30.015104   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:30.095469   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:30.256721   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:30.272241   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:39:30.578020   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:31.219361   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:32.501620   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:33.388491   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:33.394312   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:33.406462   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:33.427503   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:33.467892   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:33.548225   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:33.708583   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:34.029068   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:34.670139   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:35.062687   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:35.950570   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:38.511943   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:40.183296   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:43.633464   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:39:44.865634   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 17:39:50.423804   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:39:53.422898   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
E0223 17:39:53.873996   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:40:10.904558   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:40:14.354988   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:40:21.110128   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
E0223 17:40:22.403215   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:40:42.978381   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:40:51.867777   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:40:52.194247   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:40:55.316030   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-977000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.431065404s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-977000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-977000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-977000 describe deploy/metrics-server -n kube-system: exit status 1 (36.3835ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-977000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-977000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-977000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-977000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c",
	        "Created": "2023-02-24T01:35:14.380880766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:35:14.709453937Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hostname",
	        "HostsPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hosts",
	        "LogPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c-json.log",
	        "Name": "/old-k8s-version-977000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-977000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-977000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-977000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-977000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-977000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3505e397ab80e9420d900918aec6e67a14534455d9486eac92fb062a100dc5c7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61624"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61625"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61626"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61627"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61628"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3505e397ab80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-977000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "821104c1f595",
	                        "old-k8s-version-977000"
	                    ],
	                    "NetworkID": "b62760f23c198a189d083cf13177930b5af19fc8f1e171a0fb08c0832e6d6e8a",
	                    "EndpointID": "5d0c9d0869863374e75fabba63aa5547da0714b9a5633a5f32972176a42d8c14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 6 (396.327784ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 17:40:58.169376   43520 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-977000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-977000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (496.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-977000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0223 17:41:22.165912   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:41:24.155949   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:41:36.979873   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:41:51.842510   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:42:04.729107   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:42:13.790427   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:42:17.238106   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:42:59.138081   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:43:08.354359   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:43:12.574557   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:43:26.822373   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:43:36.038071   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:43:55.802937   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 17:44:27.925939   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 17:44:29.945572   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:44:33.393477   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:44:44.873076   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 17:44:53.429458   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
E0223 17:44:57.634668   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:45:01.083514   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:45:22.409989   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-977000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m11.787151522s)

                                                
                                                
-- stdout --
	* [old-k8s-version-977000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-977000 in cluster old-k8s-version-977000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-977000" ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 17:41:00.187100   43550 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:41:00.187255   43550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:41:00.187260   43550 out.go:309] Setting ErrFile to fd 2...
	I0223 17:41:00.187264   43550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:41:00.187382   43550 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:41:00.188732   43550 out.go:303] Setting JSON to false
	I0223 17:41:00.208101   43550 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9635,"bootTime":1677193225,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:41:00.208192   43550 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:41:00.229272   43550 out.go:177] * [old-k8s-version-977000] minikube v1.29.0 on Darwin 13.2
	I0223 17:41:00.271323   43550 notify.go:220] Checking for updates...
	I0223 17:41:00.292122   43550 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:41:00.313459   43550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:41:00.335414   43550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:41:00.356382   43550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:41:00.377432   43550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:41:00.398465   43550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:41:00.419389   43550 config.go:182] Loaded profile config "old-k8s-version-977000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 17:41:00.441444   43550 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0223 17:41:00.462438   43550 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:41:00.525118   43550 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:41:00.525254   43550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:41:00.668060   43550 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:41:00.575079433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:41:00.711591   43550 out.go:177] * Using the docker driver based on existing profile
	I0223 17:41:00.748519   43550 start.go:296] selected driver: docker
	I0223 17:41:00.748549   43550 start.go:857] validating driver "docker" against &{Name:old-k8s-version-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:41:00.748639   43550 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:41:00.752049   43550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:41:00.894472   43550 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:41:00.801879998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:41:00.894632   43550 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 17:41:00.894652   43550 cni.go:84] Creating CNI manager for ""
	I0223 17:41:00.894665   43550 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 17:41:00.894672   43550 start_flags.go:319] config:
	{Name:old-k8s-version-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:41:00.938666   43550 out.go:177] * Starting control plane node old-k8s-version-977000 in cluster old-k8s-version-977000
	I0223 17:41:00.959608   43550 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:41:00.981761   43550 out.go:177] * Pulling base image ...
	I0223 17:41:01.023628   43550 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:41:01.023689   43550 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:41:01.023728   43550 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 17:41:01.023750   43550 cache.go:57] Caching tarball of preloaded images
	I0223 17:41:01.023961   43550 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:41:01.023993   43550 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 17:41:01.025037   43550 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/config.json ...
	I0223 17:41:01.080962   43550 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:41:01.080977   43550 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:41:01.081002   43550 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:41:01.081037   43550 start.go:364] acquiring machines lock for old-k8s-version-977000: {Name:mk29826c7430a5f84af8ee3c20735d7dd9caf7e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:41:01.081129   43550 start.go:368] acquired machines lock for "old-k8s-version-977000" in 69.452µs
	I0223 17:41:01.081155   43550 start.go:96] Skipping create...Using existing machine configuration
	I0223 17:41:01.081163   43550 fix.go:55] fixHost starting: 
	I0223 17:41:01.081374   43550 cli_runner.go:164] Run: docker container inspect old-k8s-version-977000 --format={{.State.Status}}
	I0223 17:41:01.139057   43550 fix.go:103] recreateIfNeeded on old-k8s-version-977000: state=Stopped err=<nil>
	W0223 17:41:01.139087   43550 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 17:41:01.197602   43550 out.go:177] * Restarting existing docker container for "old-k8s-version-977000" ...
	I0223 17:41:01.218594   43550 cli_runner.go:164] Run: docker start old-k8s-version-977000
	I0223 17:41:01.563716   43550 cli_runner.go:164] Run: docker container inspect old-k8s-version-977000 --format={{.State.Status}}
	I0223 17:41:01.623357   43550 kic.go:426] container "old-k8s-version-977000" state is running.
	I0223 17:41:01.623932   43550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-977000
	I0223 17:41:01.686092   43550 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/config.json ...
	I0223 17:41:01.686472   43550 machine.go:88] provisioning docker machine ...
	I0223 17:41:01.686498   43550 ubuntu.go:169] provisioning hostname "old-k8s-version-977000"
	I0223 17:41:01.686587   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:01.761870   43550 main.go:141] libmachine: Using SSH client type: native
	I0223 17:41:01.762288   43550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61774 <nil> <nil>}
	I0223 17:41:01.762299   43550 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-977000 && echo "old-k8s-version-977000" | sudo tee /etc/hostname
	I0223 17:41:01.916834   43550 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-977000
	
	I0223 17:41:01.916927   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:01.977258   43550 main.go:141] libmachine: Using SSH client type: native
	I0223 17:41:01.977622   43550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61774 <nil> <nil>}
	I0223 17:41:01.977636   43550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-977000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-977000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-977000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:41:02.111483   43550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:41:02.111503   43550 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:41:02.111516   43550 ubuntu.go:177] setting up certificates
	I0223 17:41:02.111524   43550 provision.go:83] configureAuth start
	I0223 17:41:02.111603   43550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-977000
	I0223 17:41:02.169066   43550 provision.go:138] copyHostCerts
	I0223 17:41:02.169182   43550 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:41:02.169192   43550 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:41:02.169310   43550 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:41:02.169524   43550 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:41:02.169530   43550 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:41:02.169596   43550 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:41:02.169755   43550 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:41:02.169761   43550 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:41:02.169820   43550 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:41:02.169939   43550 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-977000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-977000]
	I0223 17:41:02.306819   43550 provision.go:172] copyRemoteCerts
	I0223 17:41:02.306937   43550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:41:02.306999   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:02.364484   43550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61774 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:41:02.460284   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:41:02.477970   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 17:41:02.495644   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 17:41:02.512948   43550 provision.go:86] duration metric: configureAuth took 401.40189ms
	I0223 17:41:02.512962   43550 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:41:02.513145   43550 config.go:182] Loaded profile config "old-k8s-version-977000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 17:41:02.513213   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:02.570883   43550 main.go:141] libmachine: Using SSH client type: native
	I0223 17:41:02.571283   43550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61774 <nil> <nil>}
	I0223 17:41:02.571292   43550 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:41:02.704978   43550 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:41:02.704992   43550 ubuntu.go:71] root file system type: overlay
	I0223 17:41:02.705097   43550 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:41:02.705185   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:02.764659   43550 main.go:141] libmachine: Using SSH client type: native
	I0223 17:41:02.765022   43550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61774 <nil> <nil>}
	I0223 17:41:02.765070   43550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:41:02.906358   43550 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:41:02.906458   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:02.964309   43550 main.go:141] libmachine: Using SSH client type: native
	I0223 17:41:02.964662   43550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61774 <nil> <nil>}
	I0223 17:41:02.964675   43550 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:41:03.099977   43550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:41:03.099992   43550 machine.go:91] provisioned docker machine in 1.413480141s
	I0223 17:41:03.100002   43550 start.go:300] post-start starting for "old-k8s-version-977000" (driver="docker")
	I0223 17:41:03.100009   43550 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:41:03.100073   43550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:41:03.100155   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:03.157958   43550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61774 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:41:03.252494   43550 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:41:03.256186   43550 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:41:03.256202   43550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:41:03.256209   43550 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:41:03.256213   43550 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:41:03.256220   43550 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:41:03.256310   43550 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:41:03.256490   43550 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:41:03.256682   43550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:41:03.264174   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:41:03.281557   43550 start.go:303] post-start completed in 181.540964ms
	I0223 17:41:03.281668   43550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:41:03.281718   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:03.338638   43550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61774 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:41:03.430112   43550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:41:03.435019   43550 fix.go:57] fixHost completed within 2.353803717s
	I0223 17:41:03.435036   43550 start.go:83] releasing machines lock for "old-k8s-version-977000", held for 2.353847527s
	I0223 17:41:03.435144   43550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-977000
	I0223 17:41:03.492004   43550 ssh_runner.go:195] Run: cat /version.json
	I0223 17:41:03.492034   43550 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 17:41:03.492079   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:03.492104   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:03.551445   43550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61774 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:41:03.551561   43550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61774 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/old-k8s-version-977000/id_rsa Username:docker}
	I0223 17:41:03.892540   43550 ssh_runner.go:195] Run: systemctl --version
	I0223 17:41:03.897693   43550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 17:41:03.902490   43550 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 17:41:03.902547   43550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 17:41:03.910353   43550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 17:41:03.917959   43550 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 17:41:03.917974   43550 start.go:485] detecting cgroup driver to use...
	I0223 17:41:03.917983   43550 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:41:03.918067   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:41:03.931332   43550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 17:41:03.940031   43550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:41:03.949189   43550 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:41:03.949249   43550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:41:03.958068   43550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:41:03.966720   43550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:41:03.975290   43550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:41:03.983974   43550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:41:03.991965   43550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:41:04.000579   43550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:41:04.007867   43550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:41:04.014904   43550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:41:04.094050   43550 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:41:04.163070   43550 start.go:485] detecting cgroup driver to use...
	I0223 17:41:04.163089   43550 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:41:04.163173   43550 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:41:04.173926   43550 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:41:04.173994   43550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:41:04.184565   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:41:04.200126   43550 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:41:04.269972   43550 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:41:04.364673   43550 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:41:04.364691   43550 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:41:04.377922   43550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:41:04.468792   43550 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:41:04.771742   43550 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:41:04.798059   43550 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:41:04.866118   43550 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0223 17:41:04.866337   43550 cli_runner.go:164] Run: docker exec -t old-k8s-version-977000 dig +short host.docker.internal
	I0223 17:41:04.976112   43550 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:41:04.976238   43550 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:41:04.980823   43550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:41:04.990944   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:05.050497   43550 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 17:41:05.050570   43550 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:41:05.070541   43550 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:41:05.070558   43550 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:41:05.070646   43550 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:41:05.092227   43550 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 17:41:05.092246   43550 cache_images.go:84] Images are preloaded, skipping loading
	I0223 17:41:05.092332   43550 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:41:05.118952   43550 cni.go:84] Creating CNI manager for ""
	I0223 17:41:05.118969   43550 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 17:41:05.118995   43550 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:41:05.119009   43550 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-977000 NodeName:old-k8s-version-977000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:41:05.119137   43550 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-977000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-977000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 17:41:05.119215   43550 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-977000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 17:41:05.119277   43550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 17:41:05.127389   43550 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:41:05.127478   43550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:41:05.135527   43550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 17:41:05.148687   43550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:41:05.161907   43550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 17:41:05.175078   43550 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:41:05.179133   43550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:41:05.189393   43550 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000 for IP: 192.168.76.2
	I0223 17:41:05.209539   43550 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:41:05.209711   43550 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:41:05.209780   43550 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:41:05.209886   43550 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/client.key
	I0223 17:41:05.209961   43550 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key.31bdca25
	I0223 17:41:05.210024   43550 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.key
	I0223 17:41:05.210272   43550 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:41:05.210338   43550 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:41:05.210349   43550 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:41:05.210390   43550 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:41:05.210445   43550 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:41:05.210508   43550 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:41:05.210601   43550 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:41:05.211210   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:41:05.229462   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 17:41:05.246969   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:41:05.264551   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/old-k8s-version-977000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 17:41:05.281871   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:41:05.299779   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:41:05.317637   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:41:05.335760   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:41:05.353923   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:41:05.371200   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:41:05.389054   43550 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:41:05.406759   43550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:41:05.420383   43550 ssh_runner.go:195] Run: openssl version
	I0223 17:41:05.426082   43550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:41:05.434749   43550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:41:05.438920   43550 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:41:05.438971   43550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:41:05.444446   43550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:41:05.452144   43550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:41:05.460787   43550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:41:05.464984   43550 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:41:05.465031   43550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:41:05.470443   43550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 17:41:05.477878   43550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:41:05.486394   43550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:41:05.490610   43550 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:41:05.490665   43550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:41:05.496187   43550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:41:05.503909   43550 kubeadm.go:401] StartCluster: {Name:old-k8s-version-977000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-977000 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:41:05.504027   43550 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:41:05.523030   43550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:41:05.531044   43550 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 17:41:05.531064   43550 kubeadm.go:633] restartCluster start
	I0223 17:41:05.531125   43550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 17:41:05.538551   43550 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:05.538631   43550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-977000
	I0223 17:41:05.598054   43550 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-977000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:41:05.598233   43550 kubeconfig.go:146] "old-k8s-version-977000" context is missing from /Users/jenkins/minikube-integration/15909-24428/kubeconfig - will repair!
	I0223 17:41:05.598565   43550 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/kubeconfig: {Name:mk7d15723b32e59bb8ea0777461e49fb0d77cb39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:41:05.600115   43550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 17:41:05.608192   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:05.608257   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:05.617282   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:06.117367   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:06.117447   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:06.127634   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:06.618158   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:06.618364   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:06.629755   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:07.119416   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:07.119611   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:07.131032   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:07.618003   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:07.618220   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:07.629173   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:08.117473   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:08.117679   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:08.128717   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:08.619437   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:08.619576   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:08.629604   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:09.119412   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:09.119658   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:09.131156   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:09.617634   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:09.617745   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:09.628344   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:10.117604   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:10.117755   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:10.129085   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:10.619156   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:10.619371   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:10.631013   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:11.119521   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:11.119781   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:11.130801   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:11.619550   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:11.619712   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:11.630953   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:12.119716   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:12.119819   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:12.131100   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:12.617558   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:12.617687   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:12.627613   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:13.118339   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:13.118461   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:13.128174   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:13.617607   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:13.617707   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:13.628693   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:14.119528   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:14.119601   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:14.129234   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:14.618385   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:14.618510   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:14.628375   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:15.117616   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:15.117687   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:15.127794   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:15.617584   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:15.617663   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:15.627940   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:15.627954   43550 api_server.go:165] Checking apiserver status ...
	I0223 17:41:15.628017   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:41:15.638116   43550 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:41:15.638131   43550 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 17:41:15.638140   43550 kubeadm.go:1120] stopping kube-system containers ...
	I0223 17:41:15.638215   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:41:15.658474   43550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 17:41:15.670963   43550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:41:15.680027   43550 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 24 01:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 24 01:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Feb 24 01:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 24 01:37 /etc/kubernetes/scheduler.conf
	
	I0223 17:41:15.680134   43550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 17:41:15.689611   43550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 17:41:15.698906   43550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 17:41:15.707563   43550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 17:41:15.715895   43550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:41:15.724907   43550 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 17:41:15.724920   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:41:15.781851   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:41:16.348076   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:41:16.519630   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:41:16.590679   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:41:16.665796   43550 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:41:16.665871   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:17.176067   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:17.676833   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:18.175979   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:18.676057   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:19.176686   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:19.676917   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:20.176298   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:20.677922   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:21.176364   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:21.676932   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:22.176291   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:22.676892   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:23.176163   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:23.676185   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:24.176202   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:24.676379   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:25.176253   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:25.676461   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:26.177189   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:26.676444   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:27.176515   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:27.676321   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:28.176913   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:28.676224   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:29.177503   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:29.676302   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:30.176280   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:30.677553   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:31.176974   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:31.676405   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:32.178409   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:32.677204   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:33.177271   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:33.677791   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:34.177540   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:34.676823   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:35.177658   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:35.678534   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:36.177878   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:36.676942   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:37.176434   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:37.676536   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:38.177801   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:38.678305   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:39.176878   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:39.676512   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:40.176543   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:40.677053   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:41.176603   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:41.676891   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:42.177319   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:42.676588   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:43.176604   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:43.676563   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:44.176638   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:44.676675   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:45.177465   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:45.677209   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:46.176821   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:46.677064   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:47.176621   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:47.677348   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:48.176822   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:48.676748   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:49.176722   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:49.677258   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:50.176850   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:50.677251   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:51.177575   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:51.676882   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:52.176782   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:52.678358   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:53.176822   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:53.677288   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:54.177192   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:54.677984   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:55.177678   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:55.678955   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:56.177495   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:56.677465   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:57.177877   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:57.678613   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:58.177530   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:58.677054   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:59.176938   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:41:59.677021   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:00.177303   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:00.678781   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:01.177755   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:01.677827   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:02.177050   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:02.677062   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:03.177352   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:03.678178   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:04.177564   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:04.678032   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:05.177285   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:05.677912   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:06.177269   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:06.677124   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:07.177883   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:07.677314   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:08.177835   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:08.677567   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:09.177748   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:09.677367   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:10.177176   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:10.677815   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:11.177731   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:11.677491   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:12.177314   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:12.677679   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:13.177547   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:13.679455   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:14.177563   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:14.678197   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:15.179402   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:15.677816   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:16.177406   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:16.677879   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:16.697540   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.697553   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:16.697623   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:16.716041   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.716055   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:16.716126   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:16.735884   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.735897   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:16.735974   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:16.755996   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.756008   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:16.756080   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:16.776735   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.776748   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:16.776819   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:16.796846   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.796859   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:16.796929   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:16.816094   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.816107   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:16.816202   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:16.835785   43550 logs.go:277] 0 containers: []
	W0223 17:42:16.835799   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:16.835807   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:16.835817   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:16.848125   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:16.848143   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:16.930022   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:16.930038   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:16.930047   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:16.951730   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:16.951749   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:18.997506   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045699614s)
	I0223 17:42:18.997695   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:18.997704   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:21.537189   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:21.678571   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:21.700808   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.700826   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:21.700910   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:21.719748   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.719760   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:21.719843   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:21.738691   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.738705   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:21.738775   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:21.758661   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.758675   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:21.758745   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:21.778853   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.778866   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:21.778945   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:21.798642   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.798655   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:21.798726   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:21.818666   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.818678   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:21.818755   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:21.838908   43550 logs.go:277] 0 containers: []
	W0223 17:42:21.838920   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:21.838927   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:21.838935   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:21.878322   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:21.878338   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:21.891434   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:21.891449   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:21.946977   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:21.946996   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:21.947003   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:21.968209   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:21.968223   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:24.015542   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047260497s)
	I0223 17:42:26.516383   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:26.677605   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:26.697276   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.697291   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:26.697361   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:26.716897   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.716912   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:26.717002   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:26.736076   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.736090   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:26.736164   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:26.755549   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.755563   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:26.755633   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:26.775318   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.775333   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:26.775406   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:26.795041   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.795053   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:26.795128   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:26.814580   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.814598   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:26.814688   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:26.836110   43550 logs.go:277] 0 containers: []
	W0223 17:42:26.836124   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:26.836131   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:26.836138   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:26.874052   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:26.874070   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:26.886967   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:26.886982   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:26.942193   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:26.942208   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:26.942215   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:26.963844   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:26.963859   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:29.009557   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04564054s)
	I0223 17:42:31.509931   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:31.678453   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:31.699866   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.699879   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:31.699950   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:31.719554   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.719569   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:31.719641   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:31.739760   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.739775   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:31.739847   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:31.759386   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.759399   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:31.759470   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:31.779523   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.779537   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:31.779608   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:31.799527   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.799540   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:31.799612   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:31.819771   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.819786   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:31.819863   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:31.840680   43550 logs.go:277] 0 containers: []
	W0223 17:42:31.840694   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:31.840704   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:31.840718   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:31.880758   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:31.880778   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:31.893645   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:31.893673   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:31.954759   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:31.954769   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:31.954781   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:31.976516   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:31.976531   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:34.024215   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047627653s)
	I0223 17:42:36.525698   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:36.679975   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:36.701540   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.701554   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:36.701629   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:36.720122   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.720138   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:36.720213   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:36.739152   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.739165   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:36.739233   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:36.758179   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.758193   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:36.758271   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:36.777589   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.777603   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:36.777676   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:36.796217   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.796230   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:36.796301   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:36.816401   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.816416   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:36.816498   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:36.835175   43550 logs.go:277] 0 containers: []
	W0223 17:42:36.835194   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:36.835202   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:36.835211   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:36.874761   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:36.874775   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:36.887933   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:36.887970   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:36.944215   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:36.944227   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:36.944234   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:36.965283   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:36.965299   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:39.012131   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046775383s)
	I0223 17:42:41.512444   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:41.678327   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:41.699352   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.699365   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:41.699439   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:41.719545   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.719558   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:41.719630   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:41.738790   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.738803   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:41.738880   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:41.758982   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.758996   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:41.759066   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:41.778980   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.778994   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:41.779063   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:41.798260   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.798277   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:41.798358   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:41.818533   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.818542   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:41.818614   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:41.837480   43550 logs.go:277] 0 containers: []
	W0223 17:42:41.837494   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:41.837501   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:41.837508   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:43.883784   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046219038s)
	I0223 17:42:43.883900   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:43.883908   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:43.923596   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:43.923617   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:43.937525   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:43.937542   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:43.992580   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:43.992591   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:43.992606   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:46.514544   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:46.680172   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:46.701489   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.701505   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:46.701575   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:46.721513   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.721525   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:46.721595   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:46.742080   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.742096   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:46.742188   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:46.761043   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.761057   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:46.761127   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:46.780341   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.780358   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:46.780429   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:46.798797   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.798810   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:46.798882   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:46.819318   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.819339   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:46.819419   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:46.838652   43550 logs.go:277] 0 containers: []
	W0223 17:42:46.838665   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:46.838673   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:46.838680   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:46.877825   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:46.877843   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:46.890678   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:46.890701   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:46.956167   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:46.956179   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:46.956186   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:46.977119   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:46.977134   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:49.023765   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046574574s)
	I0223 17:42:51.524360   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:51.680297   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:51.702257   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.702270   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:51.702342   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:51.722300   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.722313   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:51.722388   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:51.741116   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.741130   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:51.741213   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:51.760286   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.760300   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:51.760369   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:51.779839   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.779852   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:51.779923   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:51.798661   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.798674   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:51.798753   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:51.818761   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.818775   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:51.818853   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:51.837974   43550 logs.go:277] 0 containers: []
	W0223 17:42:51.837987   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:51.837994   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:51.838001   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:51.849975   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:51.849990   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:51.905251   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:51.905266   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:51.905275   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:51.926108   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:51.926123   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:53.973237   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047056577s)
	I0223 17:42:53.973352   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:53.973359   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:56.511823   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:42:56.678849   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:42:56.699158   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.699172   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:42:56.699251   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:42:56.717678   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.717692   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:42:56.717765   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:42:56.736880   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.736894   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:42:56.736975   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:42:56.757102   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.757114   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:42:56.757188   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:42:56.776287   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.776300   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:42:56.776373   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:42:56.795561   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.795575   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:42:56.795675   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:42:56.815504   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.815518   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:42:56.815597   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:42:56.835332   43550 logs.go:277] 0 containers: []
	W0223 17:42:56.835345   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:42:56.835353   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:42:56.835360   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:42:56.874017   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:42:56.874032   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:42:56.886740   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:42:56.886755   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:42:56.942147   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:42:56.942160   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:42:56.942166   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:42:56.963614   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:42:56.963627   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:42:59.010130   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046445623s)
	I0223 17:43:01.510447   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:01.679249   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:01.700287   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.700302   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:01.700377   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:01.719323   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.719337   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:01.719420   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:01.738388   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.738401   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:01.738478   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:01.757896   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.757909   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:01.757982   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:01.777163   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.777175   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:01.777244   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:01.797147   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.797160   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:01.797232   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:01.816422   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.816434   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:01.816514   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:01.836195   43550 logs.go:277] 0 containers: []
	W0223 17:43:01.836207   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:01.836214   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:01.836231   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:03.883594   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047305262s)
	I0223 17:43:03.883709   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:03.883716   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:03.922231   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:03.922249   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:03.935358   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:03.935373   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:03.990617   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:03.990629   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:03.990636   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:06.519244   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:06.680559   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:06.702272   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.702286   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:06.702359   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:06.722109   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.722123   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:06.722200   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:06.742221   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.742235   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:06.742305   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:06.761634   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.761648   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:06.761720   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:06.780427   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.780440   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:06.780511   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:06.800338   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.800351   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:06.800427   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:06.819991   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.820004   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:06.820079   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:06.840949   43550 logs.go:277] 0 containers: []
	W0223 17:43:06.840962   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:06.840969   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:06.840976   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:06.882175   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:06.882192   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:06.895622   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:06.895637   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:06.950745   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:06.950759   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:06.950768   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:06.971914   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:06.971927   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:09.019596   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047611691s)
	I0223 17:43:11.522038   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:11.679195   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:11.698815   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.698828   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:11.698919   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:11.718314   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.718326   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:11.718394   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:11.738306   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.738321   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:11.738392   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:11.758091   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.758106   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:11.758182   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:11.776729   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.776742   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:11.776814   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:11.795876   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.795889   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:11.795961   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:11.814904   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.814918   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:11.814996   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:11.835615   43550 logs.go:277] 0 containers: []
	W0223 17:43:11.835628   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:11.835636   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:11.835644   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:11.875539   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:11.875556   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:11.887925   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:11.887944   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:11.943577   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:11.943588   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:11.943594   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:11.965467   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:11.965483   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:14.012471   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046931163s)
	I0223 17:43:16.512782   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:16.678811   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:16.699973   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.699990   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:16.700067   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:16.719655   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.719669   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:16.719742   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:16.738900   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.738913   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:16.738982   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:16.758108   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.758121   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:16.758195   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:16.778378   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.778396   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:16.778469   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:16.797604   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.797618   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:16.797691   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:16.817127   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.817142   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:16.817225   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:16.836827   43550 logs.go:277] 0 containers: []
	W0223 17:43:16.836840   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:16.836848   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:16.836856   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:16.892651   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:16.892665   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:16.892673   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:16.914199   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:16.914219   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:18.973902   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059625568s)
	I0223 17:43:18.974019   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:18.974028   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:19.011434   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:19.011450   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:21.524570   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:21.680147   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:21.701395   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.701410   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:21.701495   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:21.721290   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.721303   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:21.721375   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:21.741570   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.741583   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:21.741687   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:21.761586   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.761598   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:21.761670   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:21.781329   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.781342   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:21.781415   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:21.800337   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.800351   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:21.800421   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:21.819939   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.819953   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:21.820028   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:21.839998   43550 logs.go:277] 0 containers: []
	W0223 17:43:21.840011   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:21.840019   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:21.840027   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:21.877934   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:21.877947   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:21.891222   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:21.891237   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:21.947018   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:21.947030   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:21.947037   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:21.967790   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:21.967804   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:24.016532   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048671114s)
	I0223 17:43:26.516861   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:26.679499   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:26.699647   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.699660   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:26.699729   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:26.719417   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.719430   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:26.719499   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:26.738951   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.738965   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:26.739034   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:26.757912   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.757926   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:26.758004   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:26.777754   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.777769   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:26.777841   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:26.796757   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.796770   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:26.796841   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:26.816496   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.816512   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:26.816595   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:26.836405   43550 logs.go:277] 0 containers: []
	W0223 17:43:26.836418   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:26.836425   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:26.836433   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:26.857721   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:26.857739   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:28.903761   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045965861s)
	I0223 17:43:28.903888   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:28.903897   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:28.942941   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:28.942961   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:28.955687   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:28.955701   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:29.009946   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:31.511534   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:31.681109   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:31.702964   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.702977   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:31.703047   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:31.723175   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.723188   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:31.723260   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:31.742885   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.742900   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:31.742972   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:31.762638   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.762652   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:31.762724   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:31.781197   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.781210   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:31.781281   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:31.800447   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.800459   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:31.800529   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:31.820828   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.820843   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:31.820920   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:31.841084   43550 logs.go:277] 0 containers: []
	W0223 17:43:31.841098   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:31.841107   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:31.841115   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:31.899245   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:31.899257   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:31.899265   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:31.949673   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:31.949689   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:33.997205   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047458377s)
	I0223 17:43:33.997318   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:33.997326   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:34.034377   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:34.034393   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:36.547335   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:36.681293   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:36.702896   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.702910   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:36.702984   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:36.722206   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.722222   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:36.722292   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:36.742002   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.742017   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:36.742094   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:36.761752   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.761766   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:36.761837   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:36.780186   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.780199   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:36.780268   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:36.799576   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.799589   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:36.799660   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:36.819523   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.819533   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:36.819604   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:36.839069   43550 logs.go:277] 0 containers: []
	W0223 17:43:36.839082   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:36.839089   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:36.839096   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:36.876090   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:36.876103   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:36.889196   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:36.889210   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:36.944086   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:36.944096   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:36.944103   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:36.965002   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:36.965016   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:39.012056   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046982989s)
	I0223 17:43:41.513111   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:41.679978   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:41.699885   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.699899   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:41.699971   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:41.718891   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.718904   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:41.718974   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:41.739275   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.739287   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:41.739356   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:41.758714   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.758729   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:41.758805   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:41.778689   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.778702   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:41.778771   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:41.798160   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.798173   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:41.798256   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:41.818064   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.818102   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:41.818201   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:41.836371   43550 logs.go:277] 0 containers: []
	W0223 17:43:41.836385   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:41.836392   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:41.836399   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:41.857779   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:41.857793   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:43.903003   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045152675s)
	I0223 17:43:43.903113   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:43.903120   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:43.941685   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:43.941705   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:43.953858   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:43.953875   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:44.009263   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:46.509958   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:46.679590   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:46.700704   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.700717   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:46.700786   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:46.719756   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.719768   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:46.719841   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:46.739081   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.739096   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:46.739167   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:46.757950   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.757963   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:46.758039   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:46.776867   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.776880   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:46.776952   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:46.796516   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.796532   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:46.796606   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:46.816133   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.816147   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:46.816220   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:46.835065   43550 logs.go:277] 0 containers: []
	W0223 17:43:46.835078   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:46.835086   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:46.835096   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:46.875282   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:46.875301   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:46.888188   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:46.888209   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:46.950223   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:46.950236   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:46.950245   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:46.971148   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:46.971162   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:49.019499   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048280118s)
	I0223 17:43:51.520242   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:51.679538   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:51.701372   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.701386   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:51.701458   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:51.721880   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.721892   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:51.721960   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:51.740930   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.740943   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:51.741013   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:51.760301   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.760316   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:51.760399   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:51.779953   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.779965   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:51.780040   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:51.799356   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.799372   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:51.799457   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:51.819849   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.819862   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:51.819939   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:51.840641   43550 logs.go:277] 0 containers: []
	W0223 17:43:51.840656   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:51.840664   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:51.840671   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:51.879317   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:51.879331   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:51.892514   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:51.892529   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:51.946321   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:51.946331   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:51.946340   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:51.967175   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:51.967188   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:54.014732   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047486943s)
	I0223 17:43:56.517156   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:43:56.679837   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:43:56.701083   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.701097   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:43:56.701172   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:43:56.721650   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.721664   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:43:56.721735   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:43:56.741188   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.741206   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:43:56.741278   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:43:56.760130   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.760143   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:43:56.760213   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:43:56.778819   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.778832   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:43:56.778910   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:43:56.797996   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.798010   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:43:56.798081   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:43:56.817153   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.817168   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:43:56.817253   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:43:56.836962   43550 logs.go:277] 0 containers: []
	W0223 17:43:56.836977   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:43:56.836985   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:43:56.836992   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:43:56.878872   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:43:56.878891   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:43:56.892036   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:43:56.892050   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:43:56.946816   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:43:56.946829   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:43:56.946836   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:43:56.967788   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:43:56.967802   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:43:59.013655   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045796007s)
	I0223 17:44:01.515746   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:01.680468   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:01.700567   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.700581   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:01.700655   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:01.719050   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.719064   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:01.719136   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:01.738711   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.738724   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:01.738795   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:01.757418   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.757430   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:01.757502   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:01.776542   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.776557   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:01.776631   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:01.795336   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.795349   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:01.795420   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:01.814257   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.814269   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:01.814340   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:01.834538   43550 logs.go:277] 0 containers: []
	W0223 17:44:01.834551   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:01.834559   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:01.834566   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:03.881608   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046981686s)
	I0223 17:44:03.881737   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:03.881745   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:03.920386   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:03.920401   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:03.932949   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:03.932962   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:03.987141   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:03.987162   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:03.987169   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:06.508771   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:06.680929   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:06.701840   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.701855   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:06.701925   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:06.721395   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.721410   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:06.721482   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:06.741002   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.741017   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:06.741111   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:06.761551   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.761565   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:06.761635   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:06.781204   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.781220   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:06.781298   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:06.800359   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.800371   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:06.800439   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:06.820226   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.820236   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:06.820312   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:06.839373   43550 logs.go:277] 0 containers: []
	W0223 17:44:06.839386   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:06.839394   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:06.839401   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:06.860357   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:06.860373   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:08.904319   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043888974s)
	I0223 17:44:08.904429   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:08.904437   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:08.944223   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:08.944240   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:08.957467   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:08.957480   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:09.013723   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:11.514640   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:11.680468   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:11.701100   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.701113   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:11.701182   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:11.720092   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.720105   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:11.720178   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:11.739409   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.739423   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:11.739493   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:11.758803   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.758818   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:11.758888   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:11.777827   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.777842   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:11.777914   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:11.797868   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.797882   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:11.797954   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:11.818476   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.818492   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:11.818578   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:11.838786   43550 logs.go:277] 0 containers: []
	W0223 17:44:11.838800   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:11.838807   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:11.838815   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:11.877115   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:11.877130   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:11.889802   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:11.889817   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:11.946941   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:11.946953   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:11.946960   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:11.967614   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:11.967628   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:14.013401   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04571704s)
	I0223 17:44:16.514573   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:16.681462   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:16.702550   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.702563   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:16.702635   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:16.723270   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.723284   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:16.723378   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:16.742360   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.742374   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:16.742445   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:16.761925   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.761939   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:16.762010   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:16.780674   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.780689   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:16.780759   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:16.799981   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.799995   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:16.800067   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:16.820076   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.820088   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:16.820161   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:16.840069   43550 logs.go:277] 0 containers: []
	W0223 17:44:16.840083   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:16.840091   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:16.840098   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:16.878181   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:16.878199   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:16.891307   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:16.891332   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:16.960082   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:16.960096   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:16.960106   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:16.982113   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:16.982129   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:19.029767   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047580882s)
	I0223 17:44:21.530219   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:21.680115   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:21.700061   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.700073   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:21.700145   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:21.719937   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.719952   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:21.720016   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:21.740209   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.740222   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:21.740292   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:21.760166   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.760180   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:21.760254   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:21.779491   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.779504   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:21.779572   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:21.799264   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.799277   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:21.799345   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:21.818901   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.818918   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:21.818990   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:21.839194   43550 logs.go:277] 0 containers: []
	W0223 17:44:21.839209   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:21.839217   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:21.839225   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:23.887168   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04788522s)
	I0223 17:44:23.887276   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:23.887283   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:23.924248   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:23.924262   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:23.936722   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:23.936735   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:23.991597   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:23.991610   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:23.991617   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:26.514759   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:26.680314   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:26.699775   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.699788   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:26.699863   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:26.719624   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.719638   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:26.719715   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:26.739033   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.739051   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:26.739122   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:26.758980   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.758993   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:26.759063   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:26.778071   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.778084   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:26.778156   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:26.797112   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.797126   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:26.797197   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:26.817348   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.817366   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:26.817451   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:26.838636   43550 logs.go:277] 0 containers: []
	W0223 17:44:26.838650   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:26.838658   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:26.838666   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:26.878867   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:26.878883   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:26.892134   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:26.892149   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:26.947646   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:26.947659   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:26.947665   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:26.969493   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:26.969511   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:29.015550   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045980375s)
	I0223 17:44:31.518033   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:31.682431   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:31.704121   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.704136   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:31.704208   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:31.723261   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.723275   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:31.723346   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:31.742656   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.742671   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:31.742744   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:31.762184   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.762197   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:31.762268   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:31.781382   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.781396   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:31.781467   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:31.801308   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.801323   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:31.801395   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:31.821443   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.821455   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:31.821526   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:31.841368   43550 logs.go:277] 0 containers: []
	W0223 17:44:31.841382   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:31.841390   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:31.841397   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:31.853499   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:31.853513   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:31.935348   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:31.935360   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:31.935367   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:31.956911   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:31.956930   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:34.002913   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045926041s)
	I0223 17:44:34.003032   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:34.003040   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:36.540098   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:36.681232   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:36.703638   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.703653   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:36.703728   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:36.723631   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.723644   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:36.723712   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:36.743282   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.743295   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:36.743365   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:36.762728   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.762740   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:36.762815   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:36.783014   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.783028   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:36.783100   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:36.801451   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.801465   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:36.801536   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:36.822709   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.822723   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:36.822798   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:36.843597   43550 logs.go:277] 0 containers: []
	W0223 17:44:36.843610   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:36.843617   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:36.843625   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:38.888565   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044883922s)
	I0223 17:44:38.888677   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:38.888684   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:38.926506   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:38.926521   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:38.938671   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:38.938684   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:38.993256   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:38.993268   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:38.993279   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:41.516015   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:41.681301   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:41.701199   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.701214   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:41.701287   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:41.721013   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.721026   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:41.721097   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:41.740939   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.740951   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:41.741025   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:41.761786   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.761799   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:41.761871   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:41.780528   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.780544   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:41.780615   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:41.799820   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.799839   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:41.799918   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:41.819786   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.819806   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:41.819895   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:41.839371   43550 logs.go:277] 0 containers: []
	W0223 17:44:41.839384   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:41.839391   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:41.839398   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:41.851399   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:41.851411   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:41.906418   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:41.906430   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:41.906437   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:41.927732   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:41.927746   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:43.973857   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046054009s)
	I0223 17:44:43.973986   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:43.973995   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:46.513954   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:46.681315   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:46.703226   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.703241   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:46.703312   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:46.722303   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.722318   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:46.722390   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:46.741241   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.741255   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:46.741328   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:46.760366   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.760379   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:46.760449   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:46.779651   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.779666   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:46.779741   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:46.799298   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.799312   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:46.799387   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:46.820026   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.820039   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:46.820114   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:46.838853   43550 logs.go:277] 0 containers: []
	W0223 17:44:46.838870   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:46.838880   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:46.838888   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:46.880075   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:46.880096   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:46.893957   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:46.893988   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:46.957789   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:46.957801   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:46.957809   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:46.978712   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:46.978727   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:49.024629   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045845497s)
	I0223 17:44:51.525427   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:51.681453   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:51.702299   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.702312   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:51.702382   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:51.721772   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.721786   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:51.721860   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:51.740874   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.740889   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:51.740958   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:51.759678   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.759690   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:51.759760   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:51.779218   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.779233   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:51.779305   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:51.798469   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.798482   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:51.798554   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:51.817646   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.817658   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:51.817728   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:51.837682   43550 logs.go:277] 0 containers: []
	W0223 17:44:51.837696   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:51.837704   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:51.837712   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:51.878475   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:51.878491   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:51.890849   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:51.890869   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:51.946242   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:44:51.946254   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:51.946262   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:51.967910   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:51.967925   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:54.014693   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04671136s)
	I0223 17:44:56.516581   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:44:56.680847   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:44:56.700679   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.700693   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:44:56.700769   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:44:56.720636   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.720650   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:44:56.720726   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:44:56.740736   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.740750   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:44:56.740825   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:44:56.760352   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.760367   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:44:56.760444   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:44:56.780542   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.780557   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:44:56.780628   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:44:56.801268   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.801282   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:44:56.801360   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:44:56.820750   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.820763   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:44:56.820832   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:44:56.840755   43550 logs.go:277] 0 containers: []
	W0223 17:44:56.840769   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:44:56.840777   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:44:56.840786   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:44:56.861967   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:44:56.861984   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:44:58.906848   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044807439s)
	I0223 17:44:58.906973   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:44:58.906984   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:44:58.944262   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:44:58.944281   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:44:58.956833   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:44:58.956847   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:44:59.012535   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:45:01.512797   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:45:01.681267   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:45:01.702950   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.702967   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:45:01.703050   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:45:01.722438   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.722451   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:45:01.722526   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:45:01.742041   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.742055   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:45:01.742126   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:45:01.762013   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.762027   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:45:01.762095   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:45:01.781356   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.781369   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:45:01.781438   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:45:01.800635   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.800648   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:45:01.800718   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:45:01.821219   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.821233   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:45:01.821306   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:45:01.840919   43550 logs.go:277] 0 containers: []
	W0223 17:45:01.840932   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:45:01.840939   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:45:01.840946   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:45:01.880095   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:45:01.880113   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:45:01.893489   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:45:01.893505   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:45:01.951115   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:45:01.951126   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:45:01.951133   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:45:01.972525   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:45:01.972540   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:45:04.020694   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048097392s)
	I0223 17:45:06.522002   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:45:06.681612   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:45:06.704039   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.704054   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:45:06.704124   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:45:06.726580   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.726597   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:45:06.726679   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:45:06.747658   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.747672   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:45:06.747742   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:45:06.767100   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.767114   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:45:06.767182   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:45:06.786647   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.786661   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:45:06.786733   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:45:06.805796   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.805809   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:45:06.805879   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:45:06.826299   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.826313   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:45:06.826397   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:45:06.845058   43550 logs.go:277] 0 containers: []
	W0223 17:45:06.845072   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:45:06.845079   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:45:06.845086   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:45:06.882640   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:45:06.882658   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:45:06.895543   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:45:06.895559   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:45:06.952107   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:45:06.952118   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:45:06.952126   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:45:06.973697   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:45:06.973710   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:45:09.017440   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043673363s)
	I0223 17:45:11.519026   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:45:11.681849   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:45:11.701765   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.701781   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:45:11.701852   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:45:11.721337   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.721350   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:45:11.721424   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:45:11.740995   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.741010   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:45:11.741079   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:45:11.759861   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.759873   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:45:11.759943   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:45:11.779353   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.779368   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:45:11.779440   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:45:11.799113   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.799127   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:45:11.799197   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:45:11.818739   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.818756   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:45:11.818826   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:45:11.839656   43550 logs.go:277] 0 containers: []
	W0223 17:45:11.839670   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:45:11.839677   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:45:11.839685   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:45:11.851766   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:45:11.851779   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:45:11.907230   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:45:11.907241   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:45:11.907248   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:45:11.928830   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:45:11.928845   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:45:13.974567   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045664366s)
	I0223 17:45:13.974680   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:45:13.974688   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:45:16.514575   43550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:45:16.681354   43550 kubeadm.go:637] restartCluster took 4m11.144707967s
	W0223 17:45:16.681547   43550 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0223 17:45:16.681593   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 17:45:17.094482   43550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:45:17.104994   43550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:45:17.112711   43550 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:45:17.112761   43550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:45:17.120388   43550 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
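The failed check above decides whether stale kubeconfig files should be cleaned up before kubeadm is re-run; with none of the four files present, the cleanup is skipped. A file-existence sketch of the same check (illustrative only, using os.Stat in place of the ls command shown in the log):

// Sketch of the stale-config check logged above; not minikube's code.
package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	missing := 0
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("cannot access %s: %v\n", f, err)
			missing++
		}
	}
	if missing > 0 {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}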
	I0223 17:45:17.120413   43550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:45:17.168934   43550 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 17:45:17.168972   43550 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:45:17.338153   43550 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:45:17.338242   43550 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:45:17.338368   43550 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:45:17.493557   43550 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:45:17.494255   43550 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:45:17.500924   43550 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 17:45:17.576096   43550 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:45:17.597793   43550 out.go:204]   - Generating certificates and keys ...
	I0223 17:45:17.597877   43550 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:45:17.597951   43550 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:45:17.598051   43550 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 17:45:17.598107   43550 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 17:45:17.598169   43550 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 17:45:17.598215   43550 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 17:45:17.598287   43550 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 17:45:17.598364   43550 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 17:45:17.598438   43550 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 17:45:17.598515   43550 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 17:45:17.598559   43550 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 17:45:17.598627   43550 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:45:17.686546   43550 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:45:17.920609   43550 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:45:18.020213   43550 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:45:18.101044   43550 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:45:18.101518   43550 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:45:18.123202   43550 out.go:204]   - Booting up control plane ...
	I0223 17:45:18.123374   43550 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:45:18.123494   43550 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:45:18.123631   43550 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:45:18.123749   43550 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:45:18.124030   43550 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:45:58.111682   43550 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 17:45:58.112608   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:45:58.112862   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:46:03.113526   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:46:03.113723   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:46:13.114746   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:46:13.114920   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:46:33.116190   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:46:33.116474   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:47:13.117686   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:47:13.117840   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:47:13.117848   43550 kubeadm.go:322] 
	I0223 17:47:13.117902   43550 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 17:47:13.117929   43550 kubeadm.go:322] 	timed out waiting for the condition
	I0223 17:47:13.117935   43550 kubeadm.go:322] 
	I0223 17:47:13.117961   43550 kubeadm.go:322] This error is likely caused by:
	I0223 17:47:13.118002   43550 kubeadm.go:322] 	- The kubelet is not running
	I0223 17:47:13.118096   43550 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 17:47:13.118106   43550 kubeadm.go:322] 
	I0223 17:47:13.118224   43550 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 17:47:13.118285   43550 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 17:47:13.118327   43550 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 17:47:13.118336   43550 kubeadm.go:322] 
	I0223 17:47:13.118427   43550 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 17:47:13.118508   43550 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 17:47:13.118578   43550 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 17:47:13.118616   43550 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 17:47:13.118685   43550 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 17:47:13.118710   43550 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 17:47:13.121411   43550 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 17:47:13.121491   43550 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 17:47:13.121600   43550 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 17:47:13.121688   43550 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:47:13.121764   43550 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 17:47:13.121818   43550 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 17:47:13.121939   43550 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 17:47:13.121966   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 17:47:13.533482   43550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:47:13.543630   43550 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 17:47:13.543688   43550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:47:13.551231   43550 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 17:47:13.551248   43550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 17:47:13.598593   43550 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 17:47:13.598633   43550 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 17:47:13.763043   43550 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 17:47:13.763138   43550 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 17:47:13.763248   43550 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 17:47:13.914601   43550 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 17:47:13.915360   43550 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 17:47:13.921875   43550 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 17:47:13.994511   43550 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 17:47:14.016063   43550 out.go:204]   - Generating certificates and keys ...
	I0223 17:47:14.016144   43550 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 17:47:14.016226   43550 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 17:47:14.016312   43550 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 17:47:14.016391   43550 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 17:47:14.016474   43550 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 17:47:14.016549   43550 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 17:47:14.016628   43550 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 17:47:14.016720   43550 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 17:47:14.016807   43550 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 17:47:14.016861   43550 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 17:47:14.016896   43550 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 17:47:14.016944   43550 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 17:47:14.149254   43550 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 17:47:14.216253   43550 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 17:47:14.269677   43550 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 17:47:14.385848   43550 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 17:47:14.386305   43550 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 17:47:14.407835   43550 out.go:204]   - Booting up control plane ...
	I0223 17:47:14.407917   43550 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 17:47:14.408000   43550 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 17:47:14.408062   43550 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 17:47:14.408123   43550 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 17:47:14.408243   43550 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 17:47:54.395462   43550 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 17:47:54.395687   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:47:54.395837   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:47:59.396704   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:47:59.396869   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:48:09.398120   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:48:09.398349   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:48:29.399113   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:48:29.399302   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:49:09.401547   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:49:09.401823   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:49:09.401833   43550 kubeadm.go:322] 
	I0223 17:49:09.401869   43550 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 17:49:09.401902   43550 kubeadm.go:322] 	timed out waiting for the condition
	I0223 17:49:09.401908   43550 kubeadm.go:322] 
	I0223 17:49:09.401944   43550 kubeadm.go:322] This error is likely caused by:
	I0223 17:49:09.401973   43550 kubeadm.go:322] 	- The kubelet is not running
	I0223 17:49:09.402062   43550 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 17:49:09.402074   43550 kubeadm.go:322] 
	I0223 17:49:09.402148   43550 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 17:49:09.402172   43550 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 17:49:09.402195   43550 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 17:49:09.402204   43550 kubeadm.go:322] 
	I0223 17:49:09.402285   43550 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 17:49:09.402362   43550 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 17:49:09.402428   43550 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 17:49:09.402465   43550 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 17:49:09.402529   43550 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 17:49:09.402554   43550 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 17:49:09.405240   43550 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 17:49:09.405314   43550 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 17:49:09.405419   43550 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 17:49:09.405498   43550 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:49:09.405568   43550 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 17:49:09.405639   43550 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 17:49:09.405651   43550 kubeadm.go:403] StartCluster complete in 8m3.891022859s
	I0223 17:49:09.405747   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:49:09.424664   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.424677   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:49:09.424750   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:49:09.443039   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.443053   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:49:09.443124   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:49:09.462259   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.462272   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:49:09.462342   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:49:09.481433   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.481447   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:49:09.481520   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:49:09.500343   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.500355   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:49:09.500425   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:49:09.520247   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.520259   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:49:09.520331   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:49:09.539552   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.539565   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:49:09.539647   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:49:09.559238   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.559252   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:49:09.559260   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:49:09.559268   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:49:09.599250   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:49:09.599267   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:49:09.611616   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:49:09.611631   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:49:09.666004   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:49:09.666015   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:49:09.666021   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:49:09.687546   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:49:09.687561   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:49:11.732615   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044996686s)
	W0223 17:49:11.732756   43550 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 17:49:11.732773   43550 out.go:239] * 
	* 
	W0223 17:49:11.732873   43550 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:49:11.732885   43550 out.go:239] * 
	* 
	W0223 17:49:11.733482   43550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 17:49:11.796235   43550 out.go:177] 
	W0223 17:49:11.838537   43550 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:49:11.838742   43550 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 17:49:11.838860   43550 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 17:49:11.880326   43550 out.go:177] 

                                                
                                                
** /stderr **
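The stderr capture above shows the same failure on both kubeadm init attempts: the wait-control-plane phase times out because the kubelet health endpoint at 127.0.0.1:10248 never answers, and the follow-up docker ps filters in the capture find no kube-apiserver, etcd, scheduler, controller-manager, or proxy containers. The kubelet-side checks that kubeadm recommends can be run against the node container from the host; the following is only a sketch, reusing the profile name from this test and assuming the node container is still running:

	# inspect the kubelet inside the minikube node container (profile name taken from the test)
	out/minikube-darwin-amd64 -p old-k8s-version-977000 ssh -- sudo systemctl status kubelet
	out/minikube-darwin-amd64 -p old-k8s-version-977000 ssh -- sudo journalctl -xeu kubelet | tail -n 100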
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-977000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
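The assertion above quotes the failing invocation (exit status 109, matching the K8S_KUBELET_NOT_RUNNING exit in the capture). Following the suggestion embedded in the capture, a manual retry with the systemd cgroup driver override would look roughly like the sketch below; the binary path, profile, memory, driver, and Kubernetes version simply mirror the test arguments, and the added flag is the one the log itself proposes, so treat it as illustrative rather than a verified fix:

	out/minikube-darwin-amd64 start -p old-k8s-version-977000 --memory=2200 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd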
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-977000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-977000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c",
	        "Created": "2023-02-24T01:35:14.380880766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 680996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:41:01.55601187Z",
	            "FinishedAt": "2023-02-24T01:40:58.627805576Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hostname",
	        "HostsPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hosts",
	        "LogPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c-json.log",
	        "Name": "/old-k8s-version-977000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-977000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-977000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-977000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-977000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-977000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8291751a84c1dddad11fd7ac12404858ad006f75c5dafa636657a7c0e1ee1362",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61774"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61775"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61776"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61772"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61773"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8291751a84c1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-977000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "821104c1f595",
	                        "old-k8s-version-977000"
	                    ],
	                    "NetworkID": "b62760f23c198a189d083cf13177930b5af19fc8f1e171a0fb08c0832e6d6e8a",
	                    "EndpointID": "9f237184db045478e6ca36e5f258df6af6202dc7e10c51e247336bee184a0143",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
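Note: individual fields of the inspect output above can be read with a Go-template query, the same form the harness itself uses later in this log (sketch; container name and expected value taken from this run):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-977000
	# prints 61774 for the state captured above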
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (398.517606ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-977000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-977000 logs -n 25: (3.369331326s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-152000 sudo                             | bridge-152000          | jenkins | v1.29.0 | 23 Feb 23 17:35 PST |                     |
	|         | systemctl status crio --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-152000 sudo                             | bridge-152000          | jenkins | v1.29.0 | 23 Feb 23 17:35 PST | 23 Feb 23 17:35 PST |
	|         | systemctl cat crio --no-pager                     |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-977000                         | old-k8s-version-977000 | jenkins | v1.29.0 | 23 Feb 23 17:35 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-152000 sudo find                        | bridge-152000          | jenkins | v1.29.0 | 23 Feb 23 17:35 PST | 23 Feb 23 17:35 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-152000 sudo crio                        | bridge-152000          | jenkins | v1.29.0 | 23 Feb 23 17:35 PST | 23 Feb 23 17:35 PST |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p bridge-152000                                  | bridge-152000          | jenkins | v1.29.0 | 23 Feb 23 17:35 PST | 23 Feb 23 17:35 PST |
	| start   | -p no-preload-732000                              | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:35 PST | 23 Feb 23 17:36 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-732000        | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:36 PST | 23 Feb 23 17:36 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-732000                              | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:36 PST | 23 Feb 23 17:36 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-732000             | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:36 PST | 23 Feb 23 17:36 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-732000                              | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:36 PST | 23 Feb 23 17:45 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-977000   | old-k8s-version-977000 | jenkins | v1.29.0 | 23 Feb 23 17:39 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-977000                         | old-k8s-version-977000 | jenkins | v1.29.0 | 23 Feb 23 17:40 PST | 23 Feb 23 17:40 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-977000        | old-k8s-version-977000 | jenkins | v1.29.0 | 23 Feb 23 17:40 PST | 23 Feb 23 17:41 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-977000                         | old-k8s-version-977000 | jenkins | v1.29.0 | 23 Feb 23 17:41 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-732000 sudo                         | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-732000                              | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-732000                              | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-732000                              | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	| delete  | -p no-preload-732000                              | no-preload-732000      | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	| start   | -p embed-certs-309000                             | embed-certs-309000     | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309000       | embed-certs-309000     | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-309000                             | embed-certs-309000     | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309000            | embed-certs-309000     | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-309000                             | embed-certs-309000     | jenkins | v1.29.0 | 23 Feb 23 17:47 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 17:47:17
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 17:47:17.189762   44301 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:47:17.189940   44301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:47:17.189944   44301 out.go:309] Setting ErrFile to fd 2...
	I0223 17:47:17.189950   44301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:47:17.190061   44301 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:47:17.191440   44301 out.go:303] Setting JSON to false
	I0223 17:47:17.209985   44301 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10012,"bootTime":1677193225,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:47:17.210061   44301 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:47:17.231861   44301 out.go:177] * [embed-certs-309000] minikube v1.29.0 on Darwin 13.2
	I0223 17:47:17.274929   44301 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:47:17.274932   44301 notify.go:220] Checking for updates...
	I0223 17:47:17.318586   44301 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:47:17.341619   44301 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:47:17.362720   44301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:47:17.383683   44301 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:47:17.404521   44301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:47:17.426301   44301 config.go:182] Loaded profile config "embed-certs-309000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:47:17.426987   44301 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:47:17.489415   44301 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:47:17.489520   44301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:47:17.630841   44301 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:47:17.538245369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:47:17.673375   44301 out.go:177] * Using the docker driver based on existing profile
	I0223 17:47:17.694609   44301 start.go:296] selected driver: docker
	I0223 17:47:17.694639   44301 start.go:857] validating driver "docker" against &{Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-309000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:47:17.694742   44301 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:47:17.698308   44301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:47:17.839576   44301 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:47:17.74705101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:47:17.839719   44301 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 17:47:17.839737   44301 cni.go:84] Creating CNI manager for ""
	I0223 17:47:17.839750   44301 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:47:17.839759   44301 start_flags.go:319] config:
	{Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-309000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:47:17.883111   44301 out.go:177] * Starting control plane node embed-certs-309000 in cluster embed-certs-309000
	I0223 17:47:17.906525   44301 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:47:17.928463   44301 out.go:177] * Pulling base image ...
	I0223 17:47:17.970604   44301 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:47:17.970678   44301 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:47:17.970703   44301 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 17:47:17.970723   44301 cache.go:57] Caching tarball of preloaded images
	I0223 17:47:17.970925   44301 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:47:17.970945   44301 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 17:47:17.971691   44301 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/config.json ...
	I0223 17:47:18.028003   44301 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:47:18.028021   44301 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 17:47:18.028042   44301 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:47:18.028080   44301 start.go:364] acquiring machines lock for embed-certs-309000: {Name:mkedc3a59481951d17de738e44d49e6b32184d99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:47:18.028171   44301 start.go:368] acquired machines lock for "embed-certs-309000" in 73.9µs
	I0223 17:47:18.028201   44301 start.go:96] Skipping create...Using existing machine configuration
	I0223 17:47:18.028210   44301 fix.go:55] fixHost starting: 
	I0223 17:47:18.028449   44301 cli_runner.go:164] Run: docker container inspect embed-certs-309000 --format={{.State.Status}}
	I0223 17:47:18.085161   44301 fix.go:103] recreateIfNeeded on embed-certs-309000: state=Stopped err=<nil>
	W0223 17:47:18.085202   44301 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 17:47:18.127431   44301 out.go:177] * Restarting existing docker container for "embed-certs-309000" ...
	I0223 17:47:18.148758   44301 cli_runner.go:164] Run: docker start embed-certs-309000
	I0223 17:47:18.482413   44301 cli_runner.go:164] Run: docker container inspect embed-certs-309000 --format={{.State.Status}}
	I0223 17:47:18.543956   44301 kic.go:426] container "embed-certs-309000" state is running.
	I0223 17:47:18.544521   44301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-309000
	I0223 17:47:18.607543   44301 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/config.json ...
	I0223 17:47:18.607980   44301 machine.go:88] provisioning docker machine ...
	I0223 17:47:18.608005   44301 ubuntu.go:169] provisioning hostname "embed-certs-309000"
	I0223 17:47:18.608072   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:18.679699   44301 main.go:141] libmachine: Using SSH client type: native
	I0223 17:47:18.680105   44301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61923 <nil> <nil>}
	I0223 17:47:18.680117   44301 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-309000 && echo "embed-certs-309000" | sudo tee /etc/hostname
	I0223 17:47:18.841931   44301 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-309000
	
	I0223 17:47:18.842017   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:18.905934   44301 main.go:141] libmachine: Using SSH client type: native
	I0223 17:47:18.906341   44301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61923 <nil> <nil>}
	I0223 17:47:18.906360   44301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-309000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-309000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-309000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:47:19.042653   44301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:47:19.042675   44301 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:47:19.042693   44301 ubuntu.go:177] setting up certificates
	I0223 17:47:19.042703   44301 provision.go:83] configureAuth start
	I0223 17:47:19.042783   44301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-309000
	I0223 17:47:19.101044   44301 provision.go:138] copyHostCerts
	I0223 17:47:19.101143   44301 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:47:19.101153   44301 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:47:19.101242   44301 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:47:19.101451   44301 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:47:19.101459   44301 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:47:19.101516   44301 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:47:19.101660   44301 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:47:19.101666   44301 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:47:19.101725   44301 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:47:19.101848   44301 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.embed-certs-309000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-309000]
	I0223 17:47:19.157589   44301 provision.go:172] copyRemoteCerts
	I0223 17:47:19.157675   44301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:47:19.157733   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:19.216413   44301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61923 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/embed-certs-309000/id_rsa Username:docker}
	I0223 17:47:19.312430   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:47:19.329990   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:47:19.347480   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0223 17:47:19.364781   44301 provision.go:86] duration metric: configureAuth took 322.057288ms
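Note: the SANs baked into the server certificate generated and copied above can be confirmed from inside the node with a standard openssl query (sketch; the /etc/docker path is the destination used in the scp step above):
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'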
	I0223 17:47:19.364794   44301 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:47:19.364952   44301 config.go:182] Loaded profile config "embed-certs-309000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:47:19.365013   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:19.422918   44301 main.go:141] libmachine: Using SSH client type: native
	I0223 17:47:19.423262   44301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61923 <nil> <nil>}
	I0223 17:47:19.423271   44301 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:47:19.559783   44301 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:47:19.559795   44301 ubuntu.go:71] root file system type: overlay
	I0223 17:47:19.559880   44301 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:47:19.559957   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:19.618360   44301 main.go:141] libmachine: Using SSH client type: native
	I0223 17:47:19.618717   44301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61923 <nil> <nil>}
	I0223 17:47:19.618766   44301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:47:19.763033   44301 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:47:19.763131   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:19.820415   44301 main.go:141] libmachine: Using SSH client type: native
	I0223 17:47:19.820767   44301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61923 <nil> <nil>}
	I0223 17:47:19.820780   44301 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:47:19.959459   44301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:47:19.959476   44301 machine.go:91] provisioned docker machine in 1.351457307s
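The docker.service unit echoed above works as an override: the empty ExecStart= line clears the command inherited from the base configuration before the new command is set; without it, systemd rejects the unit with the "more than one ExecStart= setting" error quoted in the comments. A minimal Go sketch of writing such an override file (the helper name, output path, and dockerd command are illustrative assumptions, not minikube's actual provisioning code):

package main

import (
	"fmt"
	"os"
)

// writeDockerOverride writes a unit whose [Service] section first clears any
// inherited ExecStart (the empty "ExecStart=") and then sets the desired
// command. Without the empty directive, systemd would see two ExecStart=
// settings and refuse to start a Type=notify service.
func writeDockerOverride(path, dockerdCmd string) error {
	unit := fmt.Sprintf(`[Service]
ExecStart=
ExecStart=%s
`, dockerdCmd)
	return os.WriteFile(path, []byte(unit), 0o644)
}

func main() {
	cmd := "/usr/bin/dockerd -H unix:///var/run/docker.sock"
	if err := writeDockerOverride("docker.service.new", cmd); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}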
	I0223 17:47:19.959485   44301 start.go:300] post-start starting for "embed-certs-309000" (driver="docker")
	I0223 17:47:19.959491   44301 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:47:19.959594   44301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:47:19.959660   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:20.018168   44301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61923 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/embed-certs-309000/id_rsa Username:docker}
	I0223 17:47:20.113334   44301 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:47:20.116861   44301 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:47:20.116878   44301 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:47:20.116892   44301 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:47:20.116897   44301 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:47:20.116904   44301 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:47:20.116991   44301 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:47:20.117149   44301 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:47:20.117331   44301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:47:20.124816   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:47:20.142248   44301 start.go:303] post-start completed in 182.747775ms
	I0223 17:47:20.142342   44301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:47:20.142397   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:20.219866   44301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61923 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/embed-certs-309000/id_rsa Username:docker}
	I0223 17:47:20.314199   44301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:47:20.318861   44301 fix.go:57] fixHost completed within 2.290597695s
	I0223 17:47:20.318877   44301 start.go:83] releasing machines lock for "embed-certs-309000", held for 2.290648045s
	I0223 17:47:20.318960   44301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-309000
	I0223 17:47:20.376238   44301 ssh_runner.go:195] Run: cat /version.json
	I0223 17:47:20.376262   44301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 17:47:20.376337   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:20.376340   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:20.443779   44301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61923 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/embed-certs-309000/id_rsa Username:docker}
	I0223 17:47:20.444532   44301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61923 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/embed-certs-309000/id_rsa Username:docker}
	I0223 17:47:20.588600   44301 ssh_runner.go:195] Run: systemctl --version
	I0223 17:47:20.593464   44301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:47:20.598634   44301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:47:20.614225   44301 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:47:20.614313   44301 ssh_runner.go:195] Run: which cri-dockerd
	I0223 17:47:20.618382   44301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 17:47:20.625901   44301 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 17:47:20.638811   44301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 17:47:20.646450   44301 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0223 17:47:20.646469   44301 start.go:485] detecting cgroup driver to use...
	I0223 17:47:20.646480   44301 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:47:20.646562   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:47:20.659391   44301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 17:47:20.668196   44301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:47:20.677262   44301 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:47:20.677326   44301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:47:20.685762   44301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:47:20.694194   44301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:47:20.702878   44301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:47:20.711489   44301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:47:20.719408   44301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:47:20.727891   44301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:47:20.735393   44301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:47:20.742606   44301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:47:20.819946   44301 ssh_runner.go:195] Run: sudo systemctl restart containerd
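The sed commands above switch containerd to the cgroupfs cgroup driver by rewriting /etc/containerd/config.toml in place before the restart. A rough stdlib-only Go equivalent of the SystemdCgroup edit (an illustrative sketch; the real step runs sed over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfsDriver rewrites `SystemdCgroup = ...` lines to `false`, matching
// the `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
// call in the log above.
func setCgroupfsDriver(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupfsDriver("config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}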
	I0223 17:47:20.892622   44301 start.go:485] detecting cgroup driver to use...
	I0223 17:47:20.892641   44301 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:47:20.892704   44301 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:47:20.903689   44301 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:47:20.903767   44301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:47:20.915471   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:47:20.929787   44301 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:47:20.996824   44301 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:47:21.097801   44301 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:47:21.097825   44301 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:47:21.137463   44301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:47:21.201332   44301 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:47:21.499746   44301 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:47:21.564299   44301 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 17:47:21.635327   44301 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:47:21.703333   44301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:47:21.774498   44301 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 17:47:21.786197   44301 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 17:47:21.786284   44301 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 17:47:21.790339   44301 start.go:553] Will wait 60s for crictl version
	I0223 17:47:21.790381   44301 ssh_runner.go:195] Run: which crictl
	I0223 17:47:21.793980   44301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 17:47:21.898867   44301 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 17:47:21.898949   44301 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:47:21.925413   44301 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:47:21.995667   44301 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 17:47:21.995861   44301 cli_runner.go:164] Run: docker exec -t embed-certs-309000 dig +short host.docker.internal
	I0223 17:47:22.105858   44301 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:47:22.106002   44301 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:47:22.110547   44301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:47:22.120588   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:22.179030   44301 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:47:22.179109   44301 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:47:22.200811   44301 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 17:47:22.211678   44301 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:47:22.211775   44301 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:47:22.232500   44301 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 17:47:22.232520   44301 cache_images.go:84] Images are preloaded, skipping loading
	I0223 17:47:22.232604   44301 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:47:22.258023   44301 cni.go:84] Creating CNI manager for ""
	I0223 17:47:22.258040   44301 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:47:22.258058   44301 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:47:22.258077   44301 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-309000 NodeName:embed-certs-309000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:47:22.258200   44301 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-309000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
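minikube renders the kubeadm YAML above from the option struct logged at kubeadm.go:172. A toy text/template sketch of that render step (the template fragment and parameter struct are simplified assumptions, not minikube's actual templates):

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

// A trimmed-down fragment of a kubeadm ClusterConfiguration template.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	p := kubeadmParams{
		APIServerPort:     8443,
		KubernetesVersion: "v1.26.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	_ = t.Execute(os.Stdout, p)
}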
	
	I0223 17:47:22.258269   44301 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-309000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-309000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 17:47:22.258347   44301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 17:47:22.266354   44301 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:47:22.266415   44301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:47:22.273673   44301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0223 17:47:22.286613   44301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:47:22.299452   44301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0223 17:47:22.312323   44301 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:47:22.316117   44301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
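The /etc/hosts update above follows a simple pattern: drop any existing line for the host name, append the new "ip<TAB>name" entry, and copy the file back. A stdlib-only Go sketch of the same idea (hypothetical helper; the real step runs as a bash one-liner over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "\t<name>" and appends
// "ip\tname", mirroring the grep -v / echo / cp pipeline in the log.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("hosts", "192.168.67.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}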
	I0223 17:47:22.325961   44301 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000 for IP: 192.168.67.2
	I0223 17:47:22.325980   44301 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:47:22.326143   44301 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:47:22.326194   44301 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:47:22.326313   44301 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/client.key
	I0223 17:47:22.326379   44301 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/apiserver.key.c7fa3a9e
	I0223 17:47:22.326432   44301 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/proxy-client.key
	I0223 17:47:22.326651   44301 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:47:22.326693   44301 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:47:22.326704   44301 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:47:22.326762   44301 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:47:22.326795   44301 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:47:22.326831   44301 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:47:22.326897   44301 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:47:22.327444   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:47:22.345415   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 17:47:22.362508   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:47:22.379861   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/embed-certs-309000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 17:47:22.397189   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:47:22.414885   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:47:22.432257   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:47:22.449739   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:47:22.467249   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:47:22.484994   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:47:22.502580   44301 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:47:22.519829   44301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:47:22.532818   44301 ssh_runner.go:195] Run: openssl version
	I0223 17:47:22.538426   44301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:47:22.546613   44301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:47:22.550678   44301 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:47:22.550731   44301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:47:22.556360   44301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:47:22.564129   44301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:47:22.572379   44301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:47:22.576549   44301 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:47:22.576616   44301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:47:22.582147   44301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:47:22.589890   44301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:47:22.598091   44301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:47:22.602258   44301 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:47:22.602327   44301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:47:22.608371   44301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 17:47:22.616525   44301 kubeadm.go:401] StartCluster: {Name:embed-certs-309000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-309000 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:47:22.616655   44301 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:47:22.639283   44301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:47:22.648093   44301 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 17:47:22.648110   44301 kubeadm.go:633] restartCluster start
	I0223 17:47:22.648181   44301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 17:47:22.656421   44301 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:22.656508   44301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-309000
	I0223 17:47:22.718057   44301 kubeconfig.go:135] verify returned: extract IP: "embed-certs-309000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:47:22.718222   44301 kubeconfig.go:146] "embed-certs-309000" context is missing from /Users/jenkins/minikube-integration/15909-24428/kubeconfig - will repair!
	I0223 17:47:22.718573   44301 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/kubeconfig: {Name:mk7d15723b32e59bb8ea0777461e49fb0d77cb39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:47:22.720119   44301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 17:47:22.728482   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:22.728550   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:22.737221   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:23.239341   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:23.239539   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:23.250302   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:23.737826   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:23.737937   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:23.749188   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:24.237386   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:24.237496   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:24.248361   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:24.739422   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:24.739570   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:24.750643   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:25.237505   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:25.237664   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:25.248610   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:25.737460   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:25.737697   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:25.748543   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:26.238573   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:26.238742   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:26.250172   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:26.738622   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:26.738767   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:26.749998   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:27.239467   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:27.239734   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:27.250884   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:27.739450   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:27.739713   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:27.750951   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:28.238052   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:28.238164   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:28.249214   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:28.738620   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:28.738838   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:28.749809   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:29.238101   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:29.238249   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:29.250051   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:29.737879   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:29.738039   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:29.748601   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:30.237791   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:30.237939   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:30.249087   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:30.738764   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:30.738908   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:30.750047   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:31.239558   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:31.239710   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:31.250716   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:31.739604   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:31.739758   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:31.750883   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:32.237534   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:32.237634   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:32.247635   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:32.739575   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:32.739826   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:32.750932   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:32.750942   44301 api_server.go:165] Checking apiserver status ...
	I0223 17:47:32.750994   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:47:32.759353   44301 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:32.759366   44301 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 17:47:32.759373   44301 kubeadm.go:1120] stopping kube-system containers ...
	I0223 17:47:32.759443   44301 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:47:32.780243   44301 docker.go:456] Stopping containers: [4abdb666eb52 d002255507da 58c29c548e9c 9af4914a53bc f27e579aceec aa6a28bf2fa5 9c398f8fffb1 0d06178c66dc bb1d08436003 f286abe7a9ef b0ef1cf3e090 0cd39751db79 89933bcfed01 d01bd5151b0a 4f20d2d6baae b43b9b409e20]
	I0223 17:47:32.780332   44301 ssh_runner.go:195] Run: docker stop 4abdb666eb52 d002255507da 58c29c548e9c 9af4914a53bc f27e579aceec aa6a28bf2fa5 9c398f8fffb1 0d06178c66dc bb1d08436003 f286abe7a9ef b0ef1cf3e090 0cd39751db79 89933bcfed01 d01bd5151b0a 4f20d2d6baae b43b9b409e20
	I0223 17:47:32.800871   44301 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 17:47:32.811768   44301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:47:32.819630   44301 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 24 01:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 24 01:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 24 01:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 24 01:46 /etc/kubernetes/scheduler.conf
	
	I0223 17:47:32.819687   44301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 17:47:32.827319   44301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 17:47:32.835044   44301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 17:47:32.842553   44301 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:32.842614   44301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 17:47:32.849763   44301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 17:47:32.857400   44301 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:47:32.857449   44301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 17:47:32.864828   44301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:47:32.873391   44301 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 17:47:32.873406   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:47:32.929089   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:47:33.383004   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:47:33.519081   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:47:33.577283   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:47:33.675949   44301 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:47:33.676027   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:47:34.239064   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:47:34.738779   44301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:47:34.753047   44301 api_server.go:71] duration metric: took 1.077072305s to wait for apiserver process to appear ...
	I0223 17:47:34.753062   44301 api_server.go:87] waiting for apiserver healthz status ...
	I0223 17:47:34.753077   44301 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61922/healthz ...
	I0223 17:47:34.754329   44301 api_server.go:268] stopped: https://127.0.0.1:61922/healthz: Get "https://127.0.0.1:61922/healthz": EOF
	I0223 17:47:35.254591   44301 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61922/healthz ...
	I0223 17:47:37.099542   44301 api_server.go:278] https://127.0.0.1:61922/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 17:47:37.099562   44301 api_server.go:102] status: https://127.0.0.1:61922/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 17:47:37.254794   44301 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61922/healthz ...
	I0223 17:47:37.260659   44301 api_server.go:278] https://127.0.0.1:61922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:47:37.260675   44301 api_server.go:102] status: https://127.0.0.1:61922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:47:37.754708   44301 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61922/healthz ...
	I0223 17:47:37.761105   44301 api_server.go:278] https://127.0.0.1:61922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:47:37.761117   44301 api_server.go:102] status: https://127.0.0.1:61922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:47:38.254512   44301 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61922/healthz ...
	I0223 17:47:38.259607   44301 api_server.go:278] https://127.0.0.1:61922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:47:38.259626   44301 api_server.go:102] status: https://127.0.0.1:61922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:47:38.755773   44301 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61922/healthz ...
	I0223 17:47:38.762644   44301 api_server.go:278] https://127.0.0.1:61922/healthz returned 200:
	ok
	I0223 17:47:38.769720   44301 api_server.go:140] control plane version: v1.26.1
	I0223 17:47:38.769734   44301 api_server.go:130] duration metric: took 4.016577595s to wait for apiserver health ...
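The healthz wait above keeps hitting https://127.0.0.1:61922/healthz, tolerating the 403 and 500 responses, until the endpoint returns 200. A stripped-down version of that polling pattern (the client setup is an assumption; minikube's real check also presents the cluster client certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// TLS verification is skipped because the apiserver certificate is not
// trusted by the host running the check.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:61922/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}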
	I0223 17:47:38.769740   44301 cni.go:84] Creating CNI manager for ""
	I0223 17:47:38.769748   44301 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:47:38.791602   44301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 17:47:38.813278   44301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 17:47:38.823215   44301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 17:47:38.836201   44301 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 17:47:38.845063   44301 system_pods.go:59] 8 kube-system pods found
	I0223 17:47:38.845080   44301 system_pods.go:61] "coredns-787d4945fb-2lrsj" [ec65be87-bc37-49f9-9e94-2dd58056f66c] Running
	I0223 17:47:38.845084   44301 system_pods.go:61] "etcd-embed-certs-309000" [d1d1cf8d-85ed-477a-b89d-2f616df5d375] Running
	I0223 17:47:38.845088   44301 system_pods.go:61] "kube-apiserver-embed-certs-309000" [0a799474-baba-4d8a-a330-47a753115e24] Running
	I0223 17:47:38.845092   44301 system_pods.go:61] "kube-controller-manager-embed-certs-309000" [3fbdbfc9-cfad-4c07-b8f4-60aea0a8ef1d] Running
	I0223 17:47:38.845095   44301 system_pods.go:61] "kube-proxy-czkbp" [8d2d820f-2338-41f2-b577-9432e2ac0db1] Running
	I0223 17:47:38.845099   44301 system_pods.go:61] "kube-scheduler-embed-certs-309000" [93da1970-003a-440c-838b-6a4e093ef5ea] Running
	I0223 17:47:38.845108   44301 system_pods.go:61] "metrics-server-7997d45854-54fpg" [9f29859f-7f92-41ef-8b3e-0d8e38ba4639] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 17:47:38.845111   44301 system_pods.go:61] "storage-provisioner" [4f56d10c-d067-445c-b59f-5e96b2b6dd46] Running
	I0223 17:47:38.845115   44301 system_pods.go:74] duration metric: took 8.903727ms to wait for pod list to return data ...
	I0223 17:47:38.845122   44301 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:47:38.848541   44301 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:47:38.848553   44301 node_conditions.go:123] node cpu capacity is 6
	I0223 17:47:38.848575   44301 node_conditions.go:105] duration metric: took 3.447451ms to run NodePressure ...
	I0223 17:47:38.848592   44301 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:47:39.136831   44301 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 17:47:39.141891   44301 retry.go:31] will retry after 159.667835ms: kubelet not initialised
	I0223 17:47:39.338513   44301 retry.go:31] will retry after 336.405575ms: kubelet not initialised
	I0223 17:47:39.681611   44301 kubeadm.go:784] kubelet initialised
	I0223 17:47:39.681626   44301 kubeadm.go:785] duration metric: took 544.766275ms waiting for restarted kubelet to initialise ...
	I0223 17:47:39.681636   44301 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:47:39.690184   44301 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2lrsj" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:39.699336   44301 pod_ready.go:92] pod "coredns-787d4945fb-2lrsj" in "kube-system" namespace has status "Ready":"True"
	I0223 17:47:39.699347   44301 pod_ready.go:81] duration metric: took 9.148977ms waiting for pod "coredns-787d4945fb-2lrsj" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:39.699368   44301 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:41.716338   44301 pod_ready.go:102] pod "etcd-embed-certs-309000" in "kube-system" namespace has status "Ready":"False"
	I0223 17:47:44.214891   44301 pod_ready.go:102] pod "etcd-embed-certs-309000" in "kube-system" namespace has status "Ready":"False"
	I0223 17:47:45.716330   44301 pod_ready.go:92] pod "etcd-embed-certs-309000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:47:45.716346   44301 pod_ready.go:81] duration metric: took 6.016836878s waiting for pod "etcd-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:45.716352   44301 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:47.727045   44301 pod_ready.go:102] pod "kube-apiserver-embed-certs-309000" in "kube-system" namespace has status "Ready":"False"
	I0223 17:47:49.728512   44301 pod_ready.go:102] pod "kube-apiserver-embed-certs-309000" in "kube-system" namespace has status "Ready":"False"
	I0223 17:47:51.729702   44301 pod_ready.go:102] pod "kube-apiserver-embed-certs-309000" in "kube-system" namespace has status "Ready":"False"
	I0223 17:47:54.395462   43550 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 17:47:54.395687   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:47:54.395837   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:47:52.726881   44301 pod_ready.go:92] pod "kube-apiserver-embed-certs-309000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:47:52.726894   44301 pod_ready.go:81] duration metric: took 7.010382146s waiting for pod "kube-apiserver-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:52.726903   44301 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:52.731960   44301 pod_ready.go:92] pod "kube-controller-manager-embed-certs-309000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:47:52.731969   44301 pod_ready.go:81] duration metric: took 5.059995ms waiting for pod "kube-controller-manager-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:52.731975   44301 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-czkbp" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:52.736829   44301 pod_ready.go:92] pod "kube-proxy-czkbp" in "kube-system" namespace has status "Ready":"True"
	I0223 17:47:52.736838   44301 pod_ready.go:81] duration metric: took 4.841559ms waiting for pod "kube-proxy-czkbp" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:52.736845   44301 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:53.747939   44301 pod_ready.go:92] pod "kube-scheduler-embed-certs-309000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:47:53.747953   44301 pod_ready.go:81] duration metric: took 1.011081754s waiting for pod "kube-scheduler-embed-certs-309000" in "kube-system" namespace to be "Ready" ...
	I0223 17:47:53.747960   44301 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace to be "Ready" ...
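The pod_ready lines that follow are minikube repeatedly checking the pod's Ready condition until it flips to True or the 4m0s budget runs out. A rough hand-run equivalent with kubectl (pod name, namespace and context taken from this run):

	kubectl --context embed-certs-309000 -n kube-system wait pod metrics-server-7997d45854-54fpg --for=condition=Ready --timeout=4m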
	I0223 17:47:55.761468   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:47:59.396704   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:47:59.396869   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:47:58.260710   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:00.260988   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:02.759850   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:04.761188   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:09.398120   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:48:09.398349   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:48:07.260711   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:09.760727   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:11.761408   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:13.762715   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:16.262816   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:18.761173   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:20.761250   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:22.762354   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:25.261379   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:29.399113   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:48:29.399302   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:48:27.263010   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:29.761963   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:32.260769   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:34.762847   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:37.262427   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:39.762628   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:42.262150   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:44.262911   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:46.761014   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:48.762106   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:51.261988   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:53.761239   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:55.762433   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:48:58.260890   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:49:00.262289   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:49:02.764116   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:49:05.261147   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:49:09.401547   43550 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 17:49:09.401823   43550 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 17:49:09.401833   43550 kubeadm.go:322] 
	I0223 17:49:09.401869   43550 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 17:49:09.401902   43550 kubeadm.go:322] 	timed out waiting for the condition
	I0223 17:49:09.401908   43550 kubeadm.go:322] 
	I0223 17:49:09.401944   43550 kubeadm.go:322] This error is likely caused by:
	I0223 17:49:09.401973   43550 kubeadm.go:322] 	- The kubelet is not running
	I0223 17:49:09.402062   43550 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 17:49:09.402074   43550 kubeadm.go:322] 
	I0223 17:49:09.402148   43550 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 17:49:09.402172   43550 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 17:49:09.402195   43550 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 17:49:09.402204   43550 kubeadm.go:322] 
	I0223 17:49:09.402285   43550 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 17:49:09.402362   43550 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 17:49:09.402428   43550 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 17:49:09.402465   43550 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 17:49:09.402529   43550 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 17:49:09.402554   43550 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 17:49:09.405240   43550 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 17:49:09.405314   43550 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 17:49:09.405419   43550 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 17:49:09.405498   43550 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 17:49:09.405568   43550 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 17:49:09.405639   43550 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 17:49:09.405651   43550 kubeadm.go:403] StartCluster complete in 8m3.891022859s
	I0223 17:49:09.405747   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 17:49:09.424664   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.424677   43550 logs.go:279] No container was found matching "kube-apiserver"
	I0223 17:49:09.424750   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 17:49:09.443039   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.443053   43550 logs.go:279] No container was found matching "etcd"
	I0223 17:49:09.443124   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 17:49:09.462259   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.462272   43550 logs.go:279] No container was found matching "coredns"
	I0223 17:49:09.462342   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 17:49:09.481433   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.481447   43550 logs.go:279] No container was found matching "kube-scheduler"
	I0223 17:49:09.481520   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 17:49:09.500343   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.500355   43550 logs.go:279] No container was found matching "kube-proxy"
	I0223 17:49:09.500425   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 17:49:09.520247   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.520259   43550 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 17:49:09.520331   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 17:49:09.539552   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.539565   43550 logs.go:279] No container was found matching "kindnet"
	I0223 17:49:09.539647   43550 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 17:49:09.559238   43550 logs.go:277] 0 containers: []
	W0223 17:49:09.559252   43550 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 17:49:09.559260   43550 logs.go:123] Gathering logs for kubelet ...
	I0223 17:49:09.559268   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 17:49:09.599250   43550 logs.go:123] Gathering logs for dmesg ...
	I0223 17:49:09.599267   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 17:49:09.611616   43550 logs.go:123] Gathering logs for describe nodes ...
	I0223 17:49:09.611631   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 17:49:09.666004   43550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 17:49:09.666015   43550 logs.go:123] Gathering logs for Docker ...
	I0223 17:49:09.666021   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 17:49:09.687546   43550 logs.go:123] Gathering logs for container status ...
	I0223 17:49:09.687561   43550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 17:49:11.732615   43550 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044996686s)
	W0223 17:49:11.732756   43550 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 17:49:11.732773   43550 out.go:239] * 
	W0223 17:49:11.732873   43550 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:49:11.732885   43550 out.go:239] * 
	W0223 17:49:11.733482   43550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 17:49:11.796235   43550 out.go:177] 
	W0223 17:49:11.838537   43550 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 17:49:11.838742   43550 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 17:49:11.838860   43550 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 17:49:11.880326   43550 out.go:177] 
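The failing profile exits with minikube's own suggestion a few lines above. A sketch of acting on it (profile name, Kubernetes version and docker driver are taken from this log; any other flags from the original start are omitted, and whether this actually clears the kubelet failure on this host is not verified here):

	out/minikube-darwin-amd64 start -p old-k8s-version-977000 \
	  --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd

If the cgroup-driver mismatch flagged by the [WARNING IsDockerSystemdCheck] lines is what is being chased, the Docker daemon inside the node can also be switched to the systemd driver by adding "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json and restarting docker.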
	I0223 17:49:07.261345   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:49:09.264522   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	I0223 17:49:11.760346   44301 pod_ready.go:102] pod "metrics-server-7997d45854-54fpg" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 01:41:01 UTC, end at Fri 2023-02-24 01:49:13 UTC. --
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.533706459Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534051984Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534106084Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534921800Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534961430Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534980167Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534989586Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535101973Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535158123Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535182760Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535229638Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535302939Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535357417Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535589749Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535654552Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.536109842Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.624542375Z" level=info msg="Loading containers: start."
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.705830368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.738778244Z" level=info msg="Loading containers: done."
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.747281313Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.747342089Z" level=info msg="Daemon has completed initialization"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.769248376Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.772753001Z" level=info msg="API listen on [::]:2376"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.779603495Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-24T01:49:15Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  01:49:15 up  2:48,  0 users,  load average: 0.62, 0.89, 1.34
	Linux old-k8s-version-977000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 01:41:01 UTC, end at Fri 2023-02-24 01:49:15 UTC. --
	Feb 24 01:49:13 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 01:49:14 old-k8s-version-977000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 160.
	Feb 24 01:49:14 old-k8s-version-977000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 01:49:14 old-k8s-version-977000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 01:49:14 old-k8s-version-977000 kubelet[13956]: I0224 01:49:14.428034   13956 server.go:410] Version: v1.16.0
	Feb 24 01:49:14 old-k8s-version-977000 kubelet[13956]: I0224 01:49:14.428466   13956 plugins.go:100] No cloud provider specified.
	Feb 24 01:49:14 old-k8s-version-977000 kubelet[13956]: I0224 01:49:14.428504   13956 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 01:49:14 old-k8s-version-977000 kubelet[13956]: I0224 01:49:14.430458   13956 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 01:49:14 old-k8s-version-977000 kubelet[13956]: W0224 01:49:14.431118   13956 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 01:49:14 old-k8s-version-977000 kubelet[13956]: W0224 01:49:14.431214   13956 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 01:49:14 old-k8s-version-977000 kubelet[13956]: F0224 01:49:14.431244   13956 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 01:49:14 old-k8s-version-977000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 01:49:14 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 01:49:15 old-k8s-version-977000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Feb 24 01:49:15 old-k8s-version-977000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 01:49:15 old-k8s-version-977000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 01:49:15 old-k8s-version-977000 kubelet[13968]: I0224 01:49:15.180997   13968 server.go:410] Version: v1.16.0
	Feb 24 01:49:15 old-k8s-version-977000 kubelet[13968]: I0224 01:49:15.181287   13968 plugins.go:100] No cloud provider specified.
	Feb 24 01:49:15 old-k8s-version-977000 kubelet[13968]: I0224 01:49:15.181298   13968 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 01:49:15 old-k8s-version-977000 kubelet[13968]: I0224 01:49:15.183081   13968 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 01:49:15 old-k8s-version-977000 kubelet[13968]: W0224 01:49:15.183865   13968 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 01:49:15 old-k8s-version-977000 kubelet[13968]: W0224 01:49:15.183936   13968 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 01:49:15 old-k8s-version-977000 kubelet[13968]: F0224 01:49:15.183963   13968 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 01:49:15 old-k8s-version-977000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 01:49:15 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 17:49:15.532904   44488 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
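The kubelet section of the captured logs ends in a restart loop on the same fatal, failed to run Kubelet: mountpoint for cpu not found. To see which cgroup hierarchies the node actually exposes, a diagnostic sketch using the same profile (purely for inspection, nothing is changed):

	out/minikube-darwin-amd64 -p old-k8s-version-977000 ssh "mount | grep cgroup; cat /proc/self/cgroup"

If the node turns out to be cgroup-v2-only, there is no separate v1 cpu controller mount of the kind this v1.16.0 kubelet looks for, which would match the restart counter climbing in the log above.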
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (399.644938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-977000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (496.19s)
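To iterate on just this subtest outside the CI job, the standard Go test runner can select it by name; a sketch only (the minikube integration suite normally also takes repo-specific flags for the binary and driver under test, which are not recorded in this report):

	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -timeout 90m -v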

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
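The warning above, repeated throughout the rest of this test, is the helper retrying the same label-selector list against an apiserver that no longer answers on 127.0.0.1:61773. The equivalent query by hand (namespace, label selector and context taken from this run):

	kubectl --context old-k8s-version-977000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the apiserver is down this fails the same way instead of returning a pod list.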
E0223 17:49:29.952150   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:49:33.400089   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
E0223 17:49:35.629369   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:49:44.880894   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:49:53.436141   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:50:22.418122   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:51:16.485355   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
E0223 17:51:17.957141   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:51:22.179162   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:51:24.168441   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:51:36.993210   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:51:45.641744   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:52:47.217593   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:52:59.152748   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:53:00.104692   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:53:08.369065   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:53:12.587832   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:53:25.473017   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:53:55.816241   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:54:22.197853   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:54:29.959398   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:54:31.412753   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:54:33.406801   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:54:44.885640   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:54:53.442702   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:55:18.873452   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 17:55:22.424843   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:55:53.009944   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:55:56.458964   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:56:17.963759   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:56:22.186739   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:56:24.175073   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:56:36.999818   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:57:59.158499   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:58:08.374153   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:58:12.594756   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (394.104207ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-977000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
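For manual triage outside the test harness, the same readiness check can be approximated with kubectl against the profile's kubeconfig context (a sketch, assuming minikube created the usual old-k8s-version-977000 context; while the apiserver is reported Stopped it will fail with a similar connection error):

	kubectl --context old-k8s-version-977000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard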
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-977000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-977000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c",
	        "Created": "2023-02-24T01:35:14.380880766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 680996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:41:01.55601187Z",
	            "FinishedAt": "2023-02-24T01:40:58.627805576Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hostname",
	        "HostsPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hosts",
	        "LogPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c-json.log",
	        "Name": "/old-k8s-version-977000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-977000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-977000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-977000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-977000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-977000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8291751a84c1dddad11fd7ac12404858ad006f75c5dafa636657a7c0e1ee1362",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61774"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61775"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61776"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61772"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61773"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8291751a84c1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-977000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "821104c1f595",
	                        "old-k8s-version-977000"
	                    ],
	                    "NetworkID": "b62760f23c198a189d083cf13177930b5af19fc8f1e171a0fb08c0832e6d6e8a",
	                    "EndpointID": "9f237184db045478e6ca36e5f258df6af6202dc7e10c51e247336bee184a0143",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
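The docker inspect dump above can be narrowed to the fields relevant to this failure, namely the container state and the published 8443/tcp mapping that 127.0.0.1:61773 points at, using a Go-template format string (a sketch, assuming the container name is unchanged):

	docker inspect -f '{{.State.Status}} {{index .NetworkSettings.Ports "8443/tcp"}}' old-k8s-version-977000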
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (407.844781ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-977000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-977000 logs -n 25: (3.42057437s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-977000   | old-k8s-version-977000       | jenkins | v1.29.0 | 23 Feb 23 17:39 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-977000                         | old-k8s-version-977000       | jenkins | v1.29.0 | 23 Feb 23 17:40 PST | 23 Feb 23 17:40 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-977000        | old-k8s-version-977000       | jenkins | v1.29.0 | 23 Feb 23 17:40 PST | 23 Feb 23 17:41 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-977000                         | old-k8s-version-977000       | jenkins | v1.29.0 | 23 Feb 23 17:41 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-732000 sudo                         | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-732000                              | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-732000                              | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-732000                              | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	| delete  | -p no-preload-732000                              | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	| start   | -p embed-certs-309000                             | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309000       | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309000                             | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309000            | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309000                             | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:56 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-309000 sudo                        | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-309000                             | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-309000                             | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-309000                             | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	| delete  | -p embed-certs-309000                             | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	| delete  | -p                                                | disable-driver-mounts-718000 | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | disable-driver-mounts-718000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:57 PST |
	|         | default-k8s-diff-port-763000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:57 PST | 23 Feb 23 17:57 PST |
	|         | default-k8s-diff-port-763000                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:57 PST | 23 Feb 23 17:57 PST |
	|         | default-k8s-diff-port-763000                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-763000  | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:57 PST | 23 Feb 23 17:58 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:58 PST |                     |
	|         | default-k8s-diff-port-763000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 17:58:00
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 17:58:00.153987   45324 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:58:00.154183   45324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:58:00.154189   45324 out.go:309] Setting ErrFile to fd 2...
	I0223 17:58:00.154193   45324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:58:00.154317   45324 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:58:00.156028   45324 out.go:303] Setting JSON to false
	I0223 17:58:00.175566   45324 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10655,"bootTime":1677193225,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 17:58:00.175640   45324 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 17:58:00.196652   45324 out.go:177] * [default-k8s-diff-port-763000] minikube v1.29.0 on Darwin 13.2
	I0223 17:58:00.218097   45324 notify.go:220] Checking for updates...
	I0223 17:58:00.239859   45324 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 17:58:00.260887   45324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:58:00.282057   45324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 17:58:00.304078   45324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 17:58:00.327882   45324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 17:58:00.347807   45324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 17:58:00.369280   45324 config.go:182] Loaded profile config "default-k8s-diff-port-763000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:58:00.369747   45324 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 17:58:00.430736   45324 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 17:58:00.430863   45324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:58:00.573004   45324 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:58:00.480132332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:58:00.615726   45324 out.go:177] * Using the docker driver based on existing profile
	I0223 17:58:00.637582   45324 start.go:296] selected driver: docker
	I0223 17:58:00.637608   45324 start.go:857] validating driver "docker" against &{Name:default-k8s-diff-port-763000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-763000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:58:00.637723   45324 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 17:58:00.641608   45324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 17:58:00.783196   45324 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 01:58:00.690775464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 17:58:00.783374   45324 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 17:58:00.783395   45324 cni.go:84] Creating CNI manager for ""
	I0223 17:58:00.783407   45324 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:58:00.783418   45324 start_flags.go:319] config:
	{Name:default-k8s-diff-port-763000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-763000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:58:00.827073   45324 out.go:177] * Starting control plane node default-k8s-diff-port-763000 in cluster default-k8s-diff-port-763000
	I0223 17:58:00.848928   45324 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 17:58:00.870937   45324 out.go:177] * Pulling base image ...
	I0223 17:58:00.912753   45324 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:58:00.912788   45324 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 17:58:00.912823   45324 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 17:58:00.912843   45324 cache.go:57] Caching tarball of preloaded images
	I0223 17:58:00.912967   45324 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 17:58:00.912977   45324 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 17:58:00.913580   45324 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/config.json ...
	I0223 17:58:00.968712   45324 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 17:58:00.968738   45324 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
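Editor's note: the two image.go/cache.go lines above amount to "only pull the digest-pinned kic base image if it is not already in the local daemon". An illustrative docker-CLI equivalent (not minikube's actual Go code) using the exact image reference from the log:

    # Illustrative only: skip the pull when the digest-pinned image already exists locally.
    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc'
    docker image inspect "$IMG" >/dev/null 2>&1 || docker pull "$IMG"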
	I0223 17:58:00.968757   45324 cache.go:193] Successfully downloaded all kic artifacts
	I0223 17:58:00.968806   45324 start.go:364] acquiring machines lock for default-k8s-diff-port-763000: {Name:mk9db145408bd6732b49a2ccc02df80a4bb35ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 17:58:00.968892   45324 start.go:368] acquired machines lock for "default-k8s-diff-port-763000" in 67.164µs
	I0223 17:58:00.968918   45324 start.go:96] Skipping create...Using existing machine configuration
	I0223 17:58:00.968927   45324 fix.go:55] fixHost starting: 
	I0223 17:58:00.969172   45324 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-763000 --format={{.State.Status}}
	I0223 17:58:01.025164   45324 fix.go:103] recreateIfNeeded on default-k8s-diff-port-763000: state=Stopped err=<nil>
	W0223 17:58:01.025215   45324 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 17:58:01.069137   45324 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-763000" ...
	I0223 17:58:01.090919   45324 cli_runner.go:164] Run: docker start default-k8s-diff-port-763000
	I0223 17:58:01.433632   45324 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-763000 --format={{.State.Status}}
	I0223 17:58:01.493794   45324 kic.go:426] container "default-k8s-diff-port-763000" state is running.
	I0223 17:58:01.494382   45324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-763000
	I0223 17:58:01.559251   45324 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/config.json ...
	I0223 17:58:01.559768   45324 machine.go:88] provisioning docker machine ...
	I0223 17:58:01.559836   45324 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-763000"
	I0223 17:58:01.559955   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:01.632503   45324 main.go:141] libmachine: Using SSH client type: native
	I0223 17:58:01.632918   45324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62612 <nil> <nil>}
	I0223 17:58:01.632940   45324 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-763000 && echo "default-k8s-diff-port-763000" | sudo tee /etc/hostname
	I0223 17:58:01.788211   45324 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-763000
	
	I0223 17:58:01.788304   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:01.849907   45324 main.go:141] libmachine: Using SSH client type: native
	I0223 17:58:01.850274   45324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62612 <nil> <nil>}
	I0223 17:58:01.850292   45324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-763000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-763000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-763000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 17:58:01.985289   45324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
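Editor's note: the hostname provisioning above runs two SSH commands, both idempotent. A condensed sketch of the same steps (hostname taken from the log):

    # Condensed sketch of the hostname provisioning run over SSH above.
    NAME=default-k8s-diff-port-763000
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # Point 127.0.1.1 at the new hostname only if no matching /etc/hosts entry exists yet.
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi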
	I0223 17:58:01.985314   45324 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 17:58:01.985334   45324 ubuntu.go:177] setting up certificates
	I0223 17:58:01.985342   45324 provision.go:83] configureAuth start
	I0223 17:58:01.985426   45324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-763000
	I0223 17:58:02.041515   45324 provision.go:138] copyHostCerts
	I0223 17:58:02.041612   45324 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 17:58:02.041622   45324 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 17:58:02.041711   45324 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 17:58:02.041905   45324 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 17:58:02.041911   45324 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 17:58:02.041969   45324 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 17:58:02.042110   45324 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 17:58:02.042115   45324 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 17:58:02.042186   45324 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 17:58:02.042310   45324 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-763000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-763000]
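Editor's note: the server certificate generated here is CA-signed and carries the SAN list shown in the log line. minikube does this in Go; a hypothetical openssl equivalent (file names assumed, validity period arbitrary) would look roughly like:

    # Hypothetical openssl equivalent of the server-cert generation above (file names assumed).
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem \
      -subj "/O=jenkins.default-k8s-diff-port-763000" -out server.csr
    printf 'subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-diff-port-763000\n' > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -extfile san.cnf -out server.pem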
	I0223 17:58:02.141317   45324 provision.go:172] copyRemoteCerts
	I0223 17:58:02.141370   45324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 17:58:02.141426   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:02.199515   45324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62612 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/default-k8s-diff-port-763000/id_rsa Username:docker}
	I0223 17:58:02.294072   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 17:58:02.311567   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0223 17:58:02.328852   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 17:58:02.346343   45324 provision.go:86] duration metric: configureAuth took 360.981459ms
	I0223 17:58:02.346357   45324 ubuntu.go:193] setting minikube options for container-runtime
	I0223 17:58:02.346506   45324 config.go:182] Loaded profile config "default-k8s-diff-port-763000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:58:02.346567   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:02.404415   45324 main.go:141] libmachine: Using SSH client type: native
	I0223 17:58:02.404773   45324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62612 <nil> <nil>}
	I0223 17:58:02.404791   45324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 17:58:02.543119   45324 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 17:58:02.543137   45324 ubuntu.go:71] root file system type: overlay
	I0223 17:58:02.543230   45324 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 17:58:02.543308   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:02.600968   45324 main.go:141] libmachine: Using SSH client type: native
	I0223 17:58:02.601312   45324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62612 <nil> <nil>}
	I0223 17:58:02.601372   45324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 17:58:02.744656   45324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 17:58:02.744742   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:02.801398   45324 main.go:141] libmachine: Using SSH client type: native
	I0223 17:58:02.801772   45324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62612 <nil> <nil>}
	I0223 17:58:02.801785   45324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 17:58:02.940419   45324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 17:58:02.940436   45324 machine.go:91] provisioned docker machine in 1.380627096s
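Editor's note: the docker.service update above writes the rendered unit to a ".new" file and only swaps it in (and restarts Docker) when diff reports a change, so an unchanged unit costs nothing. A minimal sketch of that pattern, assuming the rendered unit has been written to docker.service.rendered:

    # "Only restart if the unit actually changed" pattern used above (rendered file name assumed).
    sudo tee /lib/systemd/system/docker.service.new >/dev/null < docker.service.rendered
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }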
	I0223 17:58:02.940447   45324 start.go:300] post-start starting for "default-k8s-diff-port-763000" (driver="docker")
	I0223 17:58:02.940454   45324 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 17:58:02.940531   45324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 17:58:02.940594   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:02.998303   45324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62612 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/default-k8s-diff-port-763000/id_rsa Username:docker}
	I0223 17:58:03.093684   45324 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 17:58:03.097292   45324 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 17:58:03.097306   45324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 17:58:03.097313   45324 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 17:58:03.097318   45324 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 17:58:03.097326   45324 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 17:58:03.097430   45324 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 17:58:03.097587   45324 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 17:58:03.097755   45324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 17:58:03.105224   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:58:03.123431   45324 start.go:303] post-start completed in 182.968377ms
	I0223 17:58:03.123513   45324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:58:03.123566   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:03.181482   45324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62612 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/default-k8s-diff-port-763000/id_rsa Username:docker}
	I0223 17:58:03.275618   45324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 17:58:03.280428   45324 fix.go:57] fixHost completed within 2.311449263s
	I0223 17:58:03.280443   45324 start.go:83] releasing machines lock for "default-k8s-diff-port-763000", held for 2.31149293s
	I0223 17:58:03.280560   45324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-763000
	I0223 17:58:03.337376   45324 ssh_runner.go:195] Run: cat /version.json
	I0223 17:58:03.337393   45324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 17:58:03.337448   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:03.337461   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:03.397970   45324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62612 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/default-k8s-diff-port-763000/id_rsa Username:docker}
	I0223 17:58:03.398068   45324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62612 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/default-k8s-diff-port-763000/id_rsa Username:docker}
	I0223 17:58:03.491885   45324 ssh_runner.go:195] Run: systemctl --version
	I0223 17:58:03.543443   45324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 17:58:03.549744   45324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 17:58:03.565828   45324 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 17:58:03.565913   45324 ssh_runner.go:195] Run: which cri-dockerd
	I0223 17:58:03.569980   45324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 17:58:03.577419   45324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 17:58:03.590094   45324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 17:58:03.597642   45324 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0223 17:58:03.597655   45324 start.go:485] detecting cgroup driver to use...
	I0223 17:58:03.597666   45324 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:58:03.597730   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:58:03.610767   45324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 17:58:03.619242   45324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 17:58:03.627784   45324 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 17:58:03.627848   45324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 17:58:03.636664   45324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:58:03.645015   45324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 17:58:03.653405   45324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 17:58:03.662113   45324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 17:58:03.670107   45324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 17:58:03.678480   45324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 17:58:03.685749   45324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 17:58:03.692835   45324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:58:03.766869   45324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 17:58:03.836315   45324 start.go:485] detecting cgroup driver to use...
	I0223 17:58:03.836335   45324 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 17:58:03.836401   45324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 17:58:03.846800   45324 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 17:58:03.846868   45324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 17:58:03.857024   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 17:58:03.873601   45324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 17:58:03.980348   45324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 17:58:04.096906   45324 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 17:58:04.096925   45324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 17:58:04.110512   45324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:58:04.197959   45324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 17:58:04.450085   45324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:58:04.518623   45324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 17:58:04.594442   45324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 17:58:04.675783   45324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 17:58:04.748593   45324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
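Editor's note: the sequence above points crictl at cri-dockerd and then unmasks, enables, and restarts the relevant units. A condensed sketch of the same wiring (the individual systemctl calls above are collapsed here):

    # Condensed form of the cri-dockerd wiring performed above.
    sudo tee /etc/crictl.yaml >/dev/null <<'EOF'
    runtime-endpoint: unix:///var/run/cri-dockerd.sock
    image-endpoint: unix:///var/run/cri-dockerd.sock
    EOF
    sudo systemctl unmask docker.service cri-docker.socket
    sudo systemctl enable docker.socket cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart docker cri-docker.socket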
	I0223 17:58:04.760583   45324 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 17:58:04.760667   45324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 17:58:04.764939   45324 start.go:553] Will wait 60s for crictl version
	I0223 17:58:04.764983   45324 ssh_runner.go:195] Run: which crictl
	I0223 17:58:04.768923   45324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 17:58:04.875814   45324 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 17:58:04.875897   45324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:58:04.900764   45324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 17:58:04.946415   45324 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 17:58:04.946573   45324 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-763000 dig +short host.docker.internal
	I0223 17:58:05.063526   45324 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 17:58:05.063645   45324 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 17:58:05.068338   45324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
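Editor's note: the /etc/hosts update above is an idempotent "strip any old entry, append the new one, copy the result back" pattern. A generic sketch with the IP and host name from the log:

    # Idempotent /etc/hosts update pattern used above for host.minikube.internal.
    IP=192.168.65.2
    HOST=host.minikube.internal
    { grep -v "[[:space:]]$HOST\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts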
	I0223 17:58:05.079379   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:05.138380   45324 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 17:58:05.138463   45324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:58:05.158597   45324 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 17:58:05.176042   45324 docker.go:560] Images already preloaded, skipping extraction
	I0223 17:58:05.176198   45324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 17:58:05.197510   45324 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 17:58:05.197529   45324 cache_images.go:84] Images are preloaded, skipping loading
	I0223 17:58:05.197609   45324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 17:58:05.223695   45324 cni.go:84] Creating CNI manager for ""
	I0223 17:58:05.223712   45324 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:58:05.223729   45324 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 17:58:05.223747   45324 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-763000 NodeName:default-k8s-diff-port-763000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 17:58:05.223858   45324 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-763000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 17:58:05.223926   45324 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-763000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-763000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0223 17:58:05.223972   45324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 17:58:05.234937   45324 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 17:58:05.235005   45324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 17:58:05.242785   45324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (460 bytes)
	I0223 17:58:05.255854   45324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 17:58:05.268899   45324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0223 17:58:05.281952   45324 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0223 17:58:05.285819   45324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 17:58:05.295534   45324 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000 for IP: 192.168.67.2
	I0223 17:58:05.295552   45324 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:58:05.295701   45324 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 17:58:05.295759   45324 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 17:58:05.295861   45324 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.key
	I0223 17:58:05.295927   45324 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/apiserver.key.c7fa3a9e
	I0223 17:58:05.295980   45324 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/proxy-client.key
	I0223 17:58:05.296189   45324 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 17:58:05.296229   45324 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 17:58:05.296240   45324 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 17:58:05.296279   45324 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 17:58:05.296313   45324 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 17:58:05.296342   45324 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 17:58:05.296418   45324 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 17:58:05.296955   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 17:58:05.314525   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 17:58:05.331785   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 17:58:05.349174   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 17:58:05.366921   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 17:58:05.385134   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 17:58:05.405615   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 17:58:05.425322   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 17:58:05.443818   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 17:58:05.461844   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 17:58:05.481434   45324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 17:58:05.499190   45324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 17:58:05.512354   45324 ssh_runner.go:195] Run: openssl version
	I0223 17:58:05.517939   45324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 17:58:05.526299   45324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:58:05.530280   45324 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:58:05.530325   45324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 17:58:05.535795   45324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 17:58:05.543742   45324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 17:58:05.551842   45324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 17:58:05.555911   45324 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 17:58:05.555958   45324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 17:58:05.561679   45324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 17:58:05.569210   45324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 17:58:05.577446   45324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 17:58:05.581493   45324 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 17:58:05.581538   45324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 17:58:05.586883   45324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
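Editor's note: each certificate installed above follows the same OpenSSL subject-hash convention: link the PEM into /etc/ssl/certs, compute its hash, and create a "<hash>.0" symlink so TLS libraries can locate it. Condensed for minikubeCA.pem (whose hash, per the log, is b5213941):

    # Subject-hash symlink pattern used above, shown for minikubeCA.pem.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"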
	I0223 17:58:05.594660   45324 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-763000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-763000 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 17:58:05.594780   45324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:58:05.614852   45324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 17:58:05.623228   45324 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 17:58:05.623243   45324 kubeadm.go:633] restartCluster start
	I0223 17:58:05.623298   45324 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 17:58:05.630562   45324 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:05.630631   45324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-763000
	I0223 17:58:05.688938   45324 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-763000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 17:58:05.689105   45324 kubeconfig.go:146] "default-k8s-diff-port-763000" context is missing from /Users/jenkins/minikube-integration/15909-24428/kubeconfig - will repair!
	I0223 17:58:05.689461   45324 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/kubeconfig: {Name:mk7d15723b32e59bb8ea0777461e49fb0d77cb39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 17:58:05.691076   45324 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 17:58:05.699258   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:05.699336   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:05.708626   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:06.208925   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:06.209097   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:06.218704   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:06.710794   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:06.711047   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:06.722334   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:07.208992   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:07.209114   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:07.220419   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:07.709800   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:07.709926   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:07.719349   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:08.210875   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:08.210979   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:08.221580   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:08.710387   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:08.710632   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:08.721874   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:09.210809   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:09.210937   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:09.223161   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:09.709080   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:09.709273   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:09.721293   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:10.208953   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:10.209077   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:10.220033   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:10.709940   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:10.710114   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:10.721396   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:11.209521   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:11.209692   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:11.221021   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:11.709502   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:11.709652   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:11.720776   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:12.209839   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:12.209969   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:12.219628   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:12.709958   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:12.710111   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:12.721234   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:13.210909   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:13.211176   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:13.222466   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:13.709412   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:13.709527   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:13.719955   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:14.209310   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:14.209553   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:14.221643   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:14.708922   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:14.709103   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:14.719058   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:15.209882   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:15.209990   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:15.220772   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:15.710477   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:15.710727   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:15.721558   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:15.721572   45324 api_server.go:165] Checking apiserver status ...
	I0223 17:58:15.721624   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 17:58:15.730071   45324 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:15.730083   45324 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 17:58:15.730091   45324 kubeadm.go:1120] stopping kube-system containers ...
	I0223 17:58:15.730158   45324 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 17:58:15.750791   45324 docker.go:456] Stopping containers: [0e035b5ab221 6c1aaf1c82c6 98045f9864a3 0579df686e36 b18be6f4e206 493cc123ec7f df3453f7a3bf 6281b19faccf 18aa2a031466 a6f587a5554c e0964737a4a1 6d6f6f3d5d7b 10e6e25b3a22 d923bdb5bcca 16489037bb25 902f3d0dc923]
	I0223 17:58:15.750882   45324 ssh_runner.go:195] Run: docker stop 0e035b5ab221 6c1aaf1c82c6 98045f9864a3 0579df686e36 b18be6f4e206 493cc123ec7f df3453f7a3bf 6281b19faccf 18aa2a031466 a6f587a5554c e0964737a4a1 6d6f6f3d5d7b 10e6e25b3a22 d923bdb5bcca 16489037bb25 902f3d0dc923
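Editor's note: the teardown above pairs a name-filtered container listing with a bulk docker stop. An equivalent one-liner, using the same name filter shown in the log:

    # Equivalent one-liner for stopping the kube-system containers listed above.
    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop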
	I0223 17:58:15.771533   45324 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 17:58:15.782148   45324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 17:58:15.789821   45324 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 24 01:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 24 01:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Feb 24 01:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 24 01:57 /etc/kubernetes/scheduler.conf
	
	I0223 17:58:15.789879   45324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0223 17:58:15.797382   45324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0223 17:58:15.804997   45324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0223 17:58:15.812312   45324 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:15.812364   45324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 17:58:15.819543   45324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0223 17:58:15.827145   45324 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:58:15.827194   45324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 17:58:15.834273   45324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 17:58:15.841938   45324 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 17:58:15.841952   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:58:15.895439   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:58:16.637298   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:58:16.777904   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:58:16.832487   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:58:16.953319   45324 api_server.go:51] waiting for apiserver process to appear ...
	I0223 17:58:16.953391   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:58:17.463193   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:58:17.963304   45324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:58:17.982346   45324 api_server.go:71] duration metric: took 1.029008489s to wait for apiserver process to appear ...
	I0223 17:58:17.982360   45324 api_server.go:87] waiting for apiserver healthz status ...
	I0223 17:58:17.982378   45324 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62616/healthz ...
	I0223 17:58:17.983723   45324 api_server.go:268] stopped: https://127.0.0.1:62616/healthz: Get "https://127.0.0.1:62616/healthz": EOF
	I0223 17:58:18.485102   45324 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62616/healthz ...
	I0223 17:58:20.916083   45324 api_server.go:278] https://127.0.0.1:62616/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 17:58:20.916099   45324 api_server.go:102] status: https://127.0.0.1:62616/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 17:58:20.985082   45324 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62616/healthz ...
	I0223 17:58:20.992080   45324 api_server.go:278] https://127.0.0.1:62616/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:58:20.992099   45324 api_server.go:102] status: https://127.0.0.1:62616/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:58:21.484352   45324 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62616/healthz ...
	I0223 17:58:21.491506   45324 api_server.go:278] https://127.0.0.1:62616/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:58:21.491521   45324 api_server.go:102] status: https://127.0.0.1:62616/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:58:21.984216   45324 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62616/healthz ...
	I0223 17:58:21.989643   45324 api_server.go:278] https://127.0.0.1:62616/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 17:58:21.989658   45324 api_server.go:102] status: https://127.0.0.1:62616/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 17:58:22.483944   45324 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62616/healthz ...
	I0223 17:58:22.490043   45324 api_server.go:278] https://127.0.0.1:62616/healthz returned 200:
	ok
	I0223 17:58:22.497271   45324 api_server.go:140] control plane version: v1.26.1
	I0223 17:58:22.497287   45324 api_server.go:130] duration metric: took 4.51482036s to wait for apiserver health ...
	I0223 17:58:22.497294   45324 cni.go:84] Creating CNI manager for ""
	I0223 17:58:22.497306   45324 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 17:58:22.519133   45324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 17:58:22.556793   45324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 17:58:22.567344   45324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 17:58:22.582929   45324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 17:58:22.590658   45324 system_pods.go:59] 8 kube-system pods found
	I0223 17:58:22.590680   45324 system_pods.go:61] "coredns-787d4945fb-7wfgg" [b54ffc32-b7f7-4099-82d1-c6a5bcc9f043] Running
	I0223 17:58:22.590685   45324 system_pods.go:61] "etcd-default-k8s-diff-port-763000" [8f1e407d-d514-4490-bcd7-543b56ac8c63] Running
	I0223 17:58:22.590689   45324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-763000" [6b7308b7-3f58-4063-8c0e-6115abd038be] Running
	I0223 17:58:22.590694   45324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-763000" [28bd4d55-624f-406f-be4f-a00cad67fa11] Running
	I0223 17:58:22.590699   45324 system_pods.go:61] "kube-proxy-fkw4f" [0c7642ff-c831-487a-b0f1-d52c428eebae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0223 17:58:22.590704   45324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-763000" [28d82061-1a7e-4589-8620-0f588569dc77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 17:58:22.590709   45324 system_pods.go:61] "metrics-server-7997d45854-4zb99" [9f67bcb3-ce8f-41f4-a594-8669b13d822f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 17:58:22.590714   45324 system_pods.go:61] "storage-provisioner" [0ab57057-23d4-456c-a522-079ee5cbd157] Running
	I0223 17:58:22.590718   45324 system_pods.go:74] duration metric: took 7.778563ms to wait for pod list to return data ...
	I0223 17:58:22.590725   45324 node_conditions.go:102] verifying NodePressure condition ...
	I0223 17:58:22.594154   45324 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0223 17:58:22.594172   45324 node_conditions.go:123] node cpu capacity is 6
	I0223 17:58:22.594185   45324 node_conditions.go:105] duration metric: took 3.455812ms to run NodePressure ...
	I0223 17:58:22.594199   45324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 17:58:22.972892   45324 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 17:58:22.978756   45324 kubeadm.go:784] kubelet initialised
	I0223 17:58:22.978771   45324 kubeadm.go:785] duration metric: took 5.846551ms waiting for restarted kubelet to initialise ...
	I0223 17:58:22.978778   45324 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 17:58:22.985091   45324 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-7wfgg" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:22.992834   45324 pod_ready.go:92] pod "coredns-787d4945fb-7wfgg" in "kube-system" namespace has status "Ready":"True"
	I0223 17:58:22.992846   45324 pod_ready.go:81] duration metric: took 7.742541ms waiting for pod "coredns-787d4945fb-7wfgg" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:22.992853   45324 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:23.053723   45324 pod_ready.go:92] pod "etcd-default-k8s-diff-port-763000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:58:23.053735   45324 pod_ready.go:81] duration metric: took 60.876096ms waiting for pod "etcd-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:23.053742   45324 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:23.060489   45324 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-763000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:58:23.060503   45324 pod_ready.go:81] duration metric: took 6.755844ms waiting for pod "kube-apiserver-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:23.060512   45324 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:23.068599   45324 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-763000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:58:23.068612   45324 pod_ready.go:81] duration metric: took 8.092317ms waiting for pod "kube-controller-manager-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:23.068621   45324 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkw4f" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:25.403034   45324 pod_ready.go:102] pod "kube-proxy-fkw4f" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:26.399855   45324 pod_ready.go:92] pod "kube-proxy-fkw4f" in "kube-system" namespace has status "Ready":"True"
	I0223 17:58:26.399869   45324 pod_ready.go:81] duration metric: took 3.33116846s waiting for pod "kube-proxy-fkw4f" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:26.399875   45324 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:28.410846   45324 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-763000" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:30.411204   45324 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-763000" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:31.911324   45324 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-763000" in "kube-system" namespace has status "Ready":"True"
	I0223 17:58:31.911337   45324 pod_ready.go:81] duration metric: took 5.511334701s waiting for pod "kube-scheduler-default-k8s-diff-port-763000" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:31.911344   45324 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-4zb99" in "kube-system" namespace to be "Ready" ...
	I0223 17:58:33.921130   45324 pod_ready.go:102] pod "metrics-server-7997d45854-4zb99" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:35.923856   45324 pod_ready.go:102] pod "metrics-server-7997d45854-4zb99" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:37.925507   45324 pod_ready.go:102] pod "metrics-server-7997d45854-4zb99" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:40.425222   45324 pod_ready.go:102] pod "metrics-server-7997d45854-4zb99" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:42.921642   45324 pod_ready.go:102] pod "metrics-server-7997d45854-4zb99" in "kube-system" namespace has status "Ready":"False"
	I0223 17:58:44.923363   45324 pod_ready.go:102] pod "metrics-server-7997d45854-4zb99" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 01:41:01 UTC, end at Fri 2023-02-24 01:58:48 UTC. --
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.533706459Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534051984Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534106084Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534921800Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534961430Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534980167Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534989586Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535101973Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535158123Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535182760Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535229638Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535302939Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535357417Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535589749Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535654552Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.536109842Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.624542375Z" level=info msg="Loading containers: start."
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.705830368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.738778244Z" level=info msg="Loading containers: done."
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.747281313Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.747342089Z" level=info msg="Daemon has completed initialization"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.769248376Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.772753001Z" level=info msg="API listen on [::]:2376"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.779603495Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-02-24T01:58:50Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  01:58:50 up  2:58,  0 users,  load average: 0.76, 0.86, 1.07
	Linux old-k8s-version-977000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 01:41:01 UTC, end at Fri 2023-02-24 01:58:50 UTC. --
	Feb 24 01:58:48 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 01:58:49 old-k8s-version-977000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Feb 24 01:58:49 old-k8s-version-977000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 01:58:49 old-k8s-version-977000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 01:58:49 old-k8s-version-977000 kubelet[24119]: I0224 01:58:49.691196   24119 server.go:410] Version: v1.16.0
	Feb 24 01:58:49 old-k8s-version-977000 kubelet[24119]: I0224 01:58:49.691720   24119 plugins.go:100] No cloud provider specified.
	Feb 24 01:58:49 old-k8s-version-977000 kubelet[24119]: I0224 01:58:49.691755   24119 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 01:58:49 old-k8s-version-977000 kubelet[24119]: I0224 01:58:49.693476   24119 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 01:58:49 old-k8s-version-977000 kubelet[24119]: W0224 01:58:49.695974   24119 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 01:58:49 old-k8s-version-977000 kubelet[24119]: W0224 01:58:49.696040   24119 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 01:58:49 old-k8s-version-977000 kubelet[24119]: F0224 01:58:49.696063   24119 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 01:58:49 old-k8s-version-977000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 01:58:49 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 01:58:50 old-k8s-version-977000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Feb 24 01:58:50 old-k8s-version-977000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 01:58:50 old-k8s-version-977000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 01:58:50 old-k8s-version-977000 kubelet[24152]: I0224 01:58:50.438565   24152 server.go:410] Version: v1.16.0
	Feb 24 01:58:50 old-k8s-version-977000 kubelet[24152]: I0224 01:58:50.438739   24152 plugins.go:100] No cloud provider specified.
	Feb 24 01:58:50 old-k8s-version-977000 kubelet[24152]: I0224 01:58:50.438748   24152 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 01:58:50 old-k8s-version-977000 kubelet[24152]: I0224 01:58:50.440448   24152 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 01:58:50 old-k8s-version-977000 kubelet[24152]: W0224 01:58:50.441248   24152 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 01:58:50 old-k8s-version-977000 kubelet[24152]: W0224 01:58:50.441319   24152 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 01:58:50 old-k8s-version-977000 kubelet[24152]: F0224 01:58:50.441343   24152 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 01:58:50 old-k8s-version-977000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 01:58:50 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0223 17:58:50.315302   45452 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (411.074552ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-977000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.79s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0223 17:58:55.836286   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:59:29.994842   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
E0223 17:59:33.443561   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:59:44.921105   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 17:59:53.478919   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:00:22.460448   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:01:07.979380   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:01:18.000590   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 18:01:22.224466   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 18:01:24.212026   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:01:37.036484   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:02:41.049344   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:02:59.195631   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:03:08.411547   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 18:03:12.632213   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:03:55.860992   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:04:25.274839   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 18:04:30.003637   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kubenet-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:04:33.453380   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/bridge-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:04:44.931682   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 18:04:53.487878   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61773/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:05:22.468314   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:06:15.686135   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:06:18.009857   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:06:22.231767   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:06:24.221600   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:06:37.047357   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:07:38.742848   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
E0223 18:07:38.748070   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
E0223 18:07:38.759436   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
E0223 18:07:38.779561   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
E0223 18:07:38.820198   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
E0223 18:07:38.900535   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
E0223 18:07:39.060824   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:07:39.382607   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
E0223 18:07:40.023758   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:07:41.304020   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:07:43.864234   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:07:48.985710   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:07:56.541902   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 18:07:59.205885   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 18:07:59.226945   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (421.837947ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-977000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-977000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-977000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.593µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-977000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-977000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-977000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c",
	        "Created": "2023-02-24T01:35:14.380880766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 680996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T01:41:01.55601187Z",
	            "FinishedAt": "2023-02-24T01:40:58.627805576Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hostname",
	        "HostsPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/hosts",
	        "LogPath": "/var/lib/docker/containers/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c/821104c1f595e8ccff9d72d82f90c0635818dc405623f087f556efcb01cc496c-json.log",
	        "Name": "/old-k8s-version-977000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-977000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-977000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a-init/diff:/var/lib/docker/overlay2/e9788f80167b86b0aa5b795f2f3d600802b93fba8a2122959b7f1a31193d259a/diff:/var/lib/docker/overlay2/3311b08ed6fb82a6db41888762e507d536e3b26d2b03541b6a4147bee9325b04/diff:/var/lib/docker/overlay2/d791ec7693f308a6f9d9c24f7121f72e259d6a91f76cb17bb6aa879ac0aaf9e7/diff:/var/lib/docker/overlay2/68f7de26db177ab2eeec7f9dbb42f6eda85ec36655cbdfed5ee15aba180eb346/diff:/var/lib/docker/overlay2/f44dd4a4bdd627648a1d240924fcb811136db11f9c23529924b5eb0ad38b341d/diff:/var/lib/docker/overlay2/62fad3781f53b4c81d63dfbeb52c72fd61caa73bafe8d8765482ced0b72e52b5/diff:/var/lib/docker/overlay2/0365eeefae207274beb52bb15213d5ad6afabdfa6d616f6a50fd835379a6e1d9/diff:/var/lib/docker/overlay2/af460618566d5be40a633830ec0cca15769c940e94c046541f4a327299f97665/diff:/var/lib/docker/overlay2/4c27ffc65a78b664cce78bc88f4bd2dd69a17ec0e06fcf2700affcab1297be85/diff:/var/lib/docker/overlay2/e5792b
fb35e95c8e01826dcc8dfcd7a6f67883c9cef1bfcce8ae6a6e575de809/diff:/var/lib/docker/overlay2/495c698be46c617acf68fa28ebe4664feb6edb806c90a3c88c8fc3311d2fff64/diff:/var/lib/docker/overlay2/0db5db3040241866674245317d2a38eb48acdd50c6b48b8feb278ee40ab20e13/diff:/var/lib/docker/overlay2/c302751df21df9b1af14da38e239c038d79560c4a9daa0e4e3af41516c22bb8e/diff:/var/lib/docker/overlay2/651c97220cad2e4a3e41428005d1d9376b60523dc754410b324390527fab638b/diff:/var/lib/docker/overlay2/ffeb543640423404b334d309a5c4e31596a2752398132b7f4d271f82d44b7c99/diff:/var/lib/docker/overlay2/3bad591675b09dceceb973e57645a11765029010b632167351246799a7133e16/diff:/var/lib/docker/overlay2/5d3e5b2318b8a82a24c46f96b95b036cda2ece4fa0e95188ae19124d588a5ca3/diff:/var/lib/docker/overlay2/a265e8ad9fc624a094ebc7ca4bdf43fdb2c52c8ba3898d4f2c32e84befe7318d/diff:/var/lib/docker/overlay2/346c14b8ef206041c9e466c112208098b1ebe602d9ac6094edc575a3425fa283/diff:/var/lib/docker/overlay2/f5a551deafb80cde81876b6c1f2e741884a1cf1cdad3fb5d6ffab113f31e4a0a/diff:/var/lib/d
ocker/overlay2/8db1dfd60158d30288740d04e98225a6a77b9f497fbc6370795f32d48210529c/diff:/var/lib/docker/overlay2/b9f3f954350246b0dad2d56d0aaec84a79c9be55d927d07f4161981ae43d0ace/diff:/var/lib/docker/overlay2/efd37a9d5708c937817b9b909ff0f80e5992b2b4a5237e6395dd3aeb837ed3ff/diff:/var/lib/docker/overlay2/ee9d652a4f5cf4055607be0c895984aa253e4b5c6dd00414087ccdedd5329759/diff:/var/lib/docker/overlay2/9bebcd890fe3dd6c34e8793f81aa69fe76a693e4c7420d26948bc60054d3861e/diff:/var/lib/docker/overlay2/bcbf2175a046d7d3541babcb252fa51c3abb433fb894ff21c8cda4dc51df3342/diff:/var/lib/docker/overlay2/7e8d9a7a9617277b163113b911112644d8d12676a1dab919f15589cd82f326f6/diff:/var/lib/docker/overlay2/940f18fbfcb7ca9943941561a17fdf3ff24754a5a3b232c5096cee47c3dc686d/diff:/var/lib/docker/overlay2/706967ddab541e8e1c0ba8458a02526f8c6fdd7e22d9fdc2709f3f8091926e70/diff:/var/lib/docker/overlay2/89624b64a8c5f018614f425104936164ee53bd0d89163c1e2bb5ca591b7ea41f/diff:/var/lib/docker/overlay2/04cbeaf84433f334943d8d99476d687f35576e2f1bb2b9485f6406b0501
58eaf/diff:/var/lib/docker/overlay2/bb646746ad25b161f2ffecc9a438a557fff814a059e3d4fbb51f99af03e2b084/diff:/var/lib/docker/overlay2/2592092232a2ae41548bbdbc384ae0d335e680e68ba38cbe462b3dc648c2c3b4/diff:/var/lib/docker/overlay2/e3d395e2c69e44157aa14eedcc88e174800ec7eb6658ea0591ab50961cd89577/diff:/var/lib/docker/overlay2/660018fe497ca74325f295cce62d29ce6fc01008cd1529da22e979e3e8d87582/diff:/var/lib/docker/overlay2/8490879fff1ac447e19bda3843b477926744c3d2a8bc0a22264ae9b30c1ba386/diff:/var/lib/docker/overlay2/3cb21451fde458b4258fdb2d67f2a8557d9f550b0c6de008eda8092896cb10fa/diff:/var/lib/docker/overlay2/6d00d6a43bfd7d84f5c225c980f2c0a77ab7a60911980c6fb6369e860e1e2649/diff:/var/lib/docker/overlay2/4592e35c9c34141b86e00739ab5c223ec86f25dab6561a52179dc171be01eb12/diff:/var/lib/docker/overlay2/52c10afcc404325c79396d496762927c9351c2f6b462fb6e02892d4b489d2725/diff:/var/lib/docker/overlay2/20ec2a2a295d10c67a08e8a7263279eb08ed879417c5a36d41c78ea9dcc0cefb/diff:/var/lib/docker/overlay2/ba42e575768c7e5e87f54e82ba3e2a318e3271
bdfa23bce99c7637ebefc0d0ba/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343c1e867eb65b62e125ebf4bcf5c863b2836597e553ec5b02be7ee9848cd16a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-977000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-977000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-977000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-977000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8291751a84c1dddad11fd7ac12404858ad006f75c5dafa636657a7c0e1ee1362",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61774"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61775"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61776"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61772"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61773"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8291751a84c1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-977000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "821104c1f595",
	                        "old-k8s-version-977000"
	                    ],
	                    "NetworkID": "b62760f23c198a189d083cf13177930b5af19fc8f1e171a0fb08c0832e6d6e8a",
	                    "EndpointID": "9f237184db045478e6ca36e5f258df6af6202dc7e10c51e247336bee184a0143",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (438.431822ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-977000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-977000 logs -n 25: (3.816227452s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p no-preload-732000                                 | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-732000                                 | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	| delete  | -p no-preload-732000                                 | no-preload-732000            | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	| start   | -p embed-certs-309000                                | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:46 PST | 23 Feb 23 17:46 PST |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309000          | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309000                                | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309000               | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:47 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p embed-certs-309000                                | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:47 PST | 23 Feb 23 17:56 PST |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-309000 sudo                           | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p embed-certs-309000                                | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-309000                                | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-309000                                | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	| delete  | -p embed-certs-309000                                | embed-certs-309000           | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	| delete  | -p                                                   | disable-driver-mounts-718000 | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:56 PST |
	|         | disable-driver-mounts-718000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:56 PST | 23 Feb 23 17:57 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:57 PST | 23 Feb 23 17:57 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:57 PST | 23 Feb 23 17:57 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-763000     | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:57 PST | 23 Feb 23 17:58 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 17:58 PST | 23 Feb 23 18:07 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 18:07 PST | 23 Feb 23 18:07 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 18:07 PST | 23 Feb 23 18:07 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 18:07 PST | 23 Feb 23 18:07 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 18:07 PST | 23 Feb 23 18:07 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-763000 | jenkins | v1.29.0 | 23 Feb 23 18:07 PST | 23 Feb 23 18:07 PST |
	|         | default-k8s-diff-port-763000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-277000 --memory=2200 --alsologtostderr | newest-cni-277000            | jenkins | v1.29.0 | 23 Feb 23 18:07 PST |                     |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 18:07:37
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 18:07:37.605595   46120 out.go:296] Setting OutFile to fd 1 ...
	I0223 18:07:37.605776   46120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 18:07:37.605781   46120 out.go:309] Setting ErrFile to fd 2...
	I0223 18:07:37.605785   46120 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 18:07:37.605888   46120 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 18:07:37.607295   46120 out.go:303] Setting JSON to false
	I0223 18:07:37.625728   46120 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11232,"bootTime":1677193225,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 18:07:37.625896   46120 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 18:07:37.647889   46120 out.go:177] * [newest-cni-277000] minikube v1.29.0 on Darwin 13.2
	I0223 18:07:37.689680   46120 notify.go:220] Checking for updates...
	I0223 18:07:37.689696   46120 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 18:07:37.710799   46120 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 18:07:37.731936   46120 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 18:07:37.752685   46120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 18:07:37.774042   46120 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 18:07:37.796140   46120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 18:07:37.818604   46120 config.go:182] Loaded profile config "old-k8s-version-977000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 18:07:37.818681   46120 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 18:07:37.880987   46120 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 18:07:37.881108   46120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 18:07:38.022752   46120 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 02:07:37.93100212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 18:07:38.066608   46120 out.go:177] * Using the docker driver based on user configuration
	I0223 18:07:38.088305   46120 start.go:296] selected driver: docker
	I0223 18:07:38.088367   46120 start.go:857] validating driver "docker" against <nil>
	I0223 18:07:38.088395   46120 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 18:07:38.092293   46120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 18:07:38.233836   46120 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 02:07:38.142038277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 18:07:38.233966   46120 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0223 18:07:38.233988   46120 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0223 18:07:38.234188   46120 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0223 18:07:38.277571   46120 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 18:07:38.298546   46120 cni.go:84] Creating CNI manager for ""
	I0223 18:07:38.298586   46120 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 18:07:38.298598   46120 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 18:07:38.298613   46120 start_flags.go:319] config:
	{Name:newest-cni-277000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 18:07:38.341591   46120 out.go:177] * Starting control plane node newest-cni-277000 in cluster newest-cni-277000
	I0223 18:07:38.363574   46120 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 18:07:38.385617   46120 out.go:177] * Pulling base image ...
	I0223 18:07:38.427255   46120 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 18:07:38.427298   46120 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 18:07:38.427319   46120 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 18:07:38.427332   46120 cache.go:57] Caching tarball of preloaded images
	I0223 18:07:38.427468   46120 preload.go:174] Found /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 18:07:38.427479   46120 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 18:07:38.428030   46120 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/config.json ...
	I0223 18:07:38.428128   46120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/config.json: {Name:mke37d4c3b34ecc0128437c447de584eb5e47704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:38.483390   46120 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 18:07:38.483415   46120 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 18:07:38.483452   46120 cache.go:193] Successfully downloaded all kic artifacts
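The two "skipping" decisions above are cache checks: the preloaded image tarball is reused from the local cache directory, and the kicbase image is reused from the local daemon rather than pulled. A minimal sketch of the image-side check, assuming plain docker CLI calls rather than minikube's cache package (the image reference is the one from the log, without its digest):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBaseImage pulls the kic base image only if it is not already present
    // in the local daemon -- the "found ..., skipping pull" path in the log above.
    func ensureBaseImage(image string) error {
    	if err := exec.Command("docker", "image", "inspect", image).Run(); err == nil {
    		return nil // already in the local daemon, nothing to do
    	}
    	if out, err := exec.Command("docker", "pull", image).CombinedOutput(); err != nil {
    		return fmt.Errorf("pull %s: %v: %s", image, err, out)
    	}
    	return nil
    }

    func main() {
    	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768"
    	fmt.Println(ensureBaseImage(img))
    }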
	I0223 18:07:38.483488   46120 start.go:364] acquiring machines lock for newest-cni-277000: {Name:mk3d9a1b0342f7ce18f003e5fe58fe835f10dd9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 18:07:38.483635   46120 start.go:368] acquired machines lock for "newest-cni-277000" in 134.886µs
	I0223 18:07:38.483664   46120 start.go:93] Provisioning new machine with config: &{Name:newest-cni-277000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-277000 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 18:07:38.483721   46120 start.go:125] createHost starting for "" (driver="docker")
	I0223 18:07:38.505707   46120 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 18:07:38.505968   46120 start.go:159] libmachine.API.Create for "newest-cni-277000" (driver="docker")
	I0223 18:07:38.506002   46120 client.go:168] LocalClient.Create starting
	I0223 18:07:38.506133   46120 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
	I0223 18:07:38.506178   46120 main.go:141] libmachine: Decoding PEM data...
	I0223 18:07:38.506197   46120 main.go:141] libmachine: Parsing certificate...
	I0223 18:07:38.506257   46120 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
	I0223 18:07:38.506291   46120 main.go:141] libmachine: Decoding PEM data...
	I0223 18:07:38.506309   46120 main.go:141] libmachine: Parsing certificate...
	I0223 18:07:38.527819   46120 cli_runner.go:164] Run: docker network inspect newest-cni-277000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 18:07:38.583652   46120 cli_runner.go:211] docker network inspect newest-cni-277000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 18:07:38.583755   46120 network_create.go:281] running [docker network inspect newest-cni-277000] to gather additional debugging logs...
	I0223 18:07:38.583774   46120 cli_runner.go:164] Run: docker network inspect newest-cni-277000
	W0223 18:07:38.637789   46120 cli_runner.go:211] docker network inspect newest-cni-277000 returned with exit code 1
	I0223 18:07:38.637815   46120 network_create.go:284] error running [docker network inspect newest-cni-277000]: docker network inspect newest-cni-277000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-277000
	I0223 18:07:38.637833   46120 network_create.go:286] output of [docker network inspect newest-cni-277000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-277000
	
	** /stderr **
	I0223 18:07:38.637924   46120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 18:07:38.693491   46120 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 18:07:38.693829   46120 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a21cf0}
	I0223 18:07:38.693847   46120 network_create.go:123] attempt to create docker network newest-cni-277000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 18:07:38.693916   46120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-277000 newest-cni-277000
	W0223 18:07:38.749198   46120 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-277000 newest-cni-277000 returned with exit code 1
	W0223 18:07:38.749231   46120 network_create.go:148] failed to create docker network newest-cni-277000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-277000 newest-cni-277000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 18:07:38.749244   46120 network_create.go:115] failed to create docker network newest-cni-277000 192.168.58.0/24, will retry: subnet is taken
	I0223 18:07:38.750546   46120 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 18:07:38.750885   46120 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a45c30}
	I0223 18:07:38.750896   46120 network_create.go:123] attempt to create docker network newest-cni-277000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 18:07:38.750975   46120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-277000 newest-cni-277000
	I0223 18:07:38.837865   46120 network_create.go:107] docker network newest-cni-277000 192.168.67.0/24 created
	I0223 18:07:38.837904   46120 kic.go:117] calculated static IP "192.168.67.2" for the "newest-cni-277000" container
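The lines above show how a free subnet is picked for the new cluster network: 192.168.49.0/24 is skipped as already reserved, the create on 192.168.58.0/24 fails with "Pool overlaps with other one on this address space", and the next private /24 (192.168.67.0/24) succeeds, after which .1 becomes the gateway and .2 the node's static IP. A minimal sketch of that skip-and-retry loop, assuming plain docker CLI calls and a hard-coded candidate list (the real code also consults existing networks before trying a subnet):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // createClusterNetwork walks candidate /24 subnets and retries
    // "docker network create" until one does not overlap an existing pool.
    func createClusterNetwork(name string, candidates []string) (string, error) {
    	for _, cidr := range candidates {
    		gateway := strings.TrimSuffix(cidr, "0/24") + "1" // 192.168.67.0/24 -> 192.168.67.1
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+cidr, "--gateway="+gateway,
    			"-o", "--ip-masq", "-o", "--icc",
    			"-o", "com.docker.network.driver.mtu=1500", name).CombinedOutput()
    		if err == nil {
    			return cidr, nil
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			fmt.Printf("subnet %s is taken, trying the next candidate\n", cidr)
    			continue // the behaviour the log shows for 192.168.58.0/24
    		}
    		return "", fmt.Errorf("network create failed: %v: %s", err, out)
    	}
    	return "", fmt.Errorf("no free subnet among candidates")
    }

    func main() {
    	cidr, err := createClusterNetwork("newest-cni-277000",
    		[]string{"192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"})
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("created network on", cidr) // gateway .1, first node IP .2
    }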
	I0223 18:07:38.838026   46120 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 18:07:38.895672   46120 cli_runner.go:164] Run: docker volume create newest-cni-277000 --label name.minikube.sigs.k8s.io=newest-cni-277000 --label created_by.minikube.sigs.k8s.io=true
	I0223 18:07:38.950224   46120 oci.go:103] Successfully created a docker volume newest-cni-277000
	I0223 18:07:38.950351   46120 cli_runner.go:164] Run: docker run --rm --name newest-cni-277000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-277000 --entrypoint /usr/bin/test -v newest-cni-277000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 18:07:39.384418   46120 oci.go:107] Successfully prepared a docker volume newest-cni-277000
	I0223 18:07:39.384469   46120 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 18:07:39.384483   46120 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 18:07:39.384596   46120 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-277000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 18:07:46.153333   46120 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-277000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.768477601s)
	I0223 18:07:46.153356   46120 kic.go:199] duration metric: took 6.768669 seconds to extract preloaded images to volume
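The preload step above populates the newest-cni-277000 volume by running a throwaway container whose entrypoint is tar, mounting the lz4 tarball read-only and the named volume as the extraction target. A small sketch of the same idea, assuming the docker CLI is on PATH (paths and names are copied from the log, with the long cache path shortened to the file name):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreloadToVolume fills a named Docker volume by running a throwaway
    // container whose entrypoint is tar, mirroring the command in the log above.
    func extractPreloadToVolume(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := extractPreloadToVolume(
    		"preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4",
    		"newest-cni-277000",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768")
    	fmt.Println("done, err =", err)
    }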
	I0223 18:07:46.153471   46120 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 18:07:46.305065   46120 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-277000 --name newest-cni-277000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-277000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-277000 --network newest-cni-277000 --ip 192.168.67.2 --volume newest-cni-277000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 18:07:46.675948   46120 cli_runner.go:164] Run: docker container inspect newest-cni-277000 --format={{.State.Running}}
	I0223 18:07:46.744830   46120 cli_runner.go:164] Run: docker container inspect newest-cni-277000 --format={{.State.Status}}
	I0223 18:07:46.809060   46120 cli_runner.go:164] Run: docker exec newest-cni-277000 stat /var/lib/dpkg/alternatives/iptables
	I0223 18:07:46.918106   46120 oci.go:144] the created container "newest-cni-277000" has a running status.
	I0223 18:07:46.918141   46120 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/newest-cni-277000/id_rsa...
	I0223 18:07:47.028427   46120 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/newest-cni-277000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 18:07:47.138603   46120 cli_runner.go:164] Run: docker container inspect newest-cni-277000 --format={{.State.Status}}
	I0223 18:07:47.199264   46120 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 18:07:47.199286   46120 kic_runner.go:114] Args: [docker exec --privileged newest-cni-277000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 18:07:47.302109   46120 cli_runner.go:164] Run: docker container inspect newest-cni-277000 --format={{.State.Status}}
	I0223 18:07:47.359425   46120 machine.go:88] provisioning docker machine ...
	I0223 18:07:47.359464   46120 ubuntu.go:169] provisioning hostname "newest-cni-277000"
	I0223 18:07:47.359588   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:47.418356   46120 main.go:141] libmachine: Using SSH client type: native
	I0223 18:07:47.418752   46120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63127 <nil> <nil>}
	I0223 18:07:47.418790   46120 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-277000 && echo "newest-cni-277000" | sudo tee /etc/hostname
	I0223 18:07:47.561319   46120 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-277000
	
	I0223 18:07:47.561400   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:47.642760   46120 main.go:141] libmachine: Using SSH client type: native
	I0223 18:07:47.643103   46120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63127 <nil> <nil>}
	I0223 18:07:47.643116   46120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-277000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-277000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-277000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 18:07:47.778028   46120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 18:07:47.778049   46120 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
	I0223 18:07:47.778065   46120 ubuntu.go:177] setting up certificates
	I0223 18:07:47.778071   46120 provision.go:83] configureAuth start
	I0223 18:07:47.778151   46120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277000
	I0223 18:07:47.834935   46120 provision.go:138] copyHostCerts
	I0223 18:07:47.835035   46120 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
	I0223 18:07:47.835044   46120 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
	I0223 18:07:47.835168   46120 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
	I0223 18:07:47.835378   46120 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
	I0223 18:07:47.835384   46120 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
	I0223 18:07:47.835452   46120 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
	I0223 18:07:47.835591   46120 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
	I0223 18:07:47.835597   46120 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
	I0223 18:07:47.835667   46120 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
	I0223 18:07:47.835789   46120 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.newest-cni-277000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-277000]
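The provision step above issues a server certificate signed by the minikube CA, with SANs covering the node IP (192.168.67.2), loopback, localhost, minikube, and the profile name. A rough sketch of issuing such a SAN certificate with Go's crypto/x509 (self-contained, so it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, and error handling is elided; this is not minikube's provisioner):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA; real provisioning would load the existing CA key pair.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-277000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-277000"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }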
	I0223 18:07:48.002928   46120 provision.go:172] copyRemoteCerts
	I0223 18:07:48.002985   46120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 18:07:48.003050   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:48.060573   46120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/newest-cni-277000/id_rsa Username:docker}
	I0223 18:07:48.156768   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 18:07:48.174238   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 18:07:48.192940   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0223 18:07:48.211547   46120 provision.go:86] duration metric: configureAuth took 433.447226ms
	I0223 18:07:48.211562   46120 ubuntu.go:193] setting minikube options for container-runtime
	I0223 18:07:48.211733   46120 config.go:182] Loaded profile config "newest-cni-277000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 18:07:48.211809   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:48.272736   46120 main.go:141] libmachine: Using SSH client type: native
	I0223 18:07:48.273091   46120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63127 <nil> <nil>}
	I0223 18:07:48.273105   46120 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 18:07:48.409865   46120 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 18:07:48.409879   46120 ubuntu.go:71] root file system type: overlay
	I0223 18:07:48.409959   46120 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 18:07:48.410051   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:48.467104   46120 main.go:141] libmachine: Using SSH client type: native
	I0223 18:07:48.468608   46120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63127 <nil> <nil>}
	I0223 18:07:48.468659   46120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 18:07:48.613830   46120 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 18:07:48.613935   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:48.670693   46120 main.go:141] libmachine: Using SSH client type: native
	I0223 18:07:48.671055   46120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63127 <nil> <nil>}
	I0223 18:07:48.671068   46120 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 18:07:49.302008   46120 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 02:07:48.610659420 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 18:07:49.302029   46120 machine.go:91] provisioned docker machine in 1.942526166s
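The drop-in applied in the diff above first writes an empty ExecStart= and only then redefines the daemon command line; the embedded comment explains why: without the clearing assignment, systemd would see two ExecStart= settings for a non-oneshot service and refuse to start it. A minimal sketch of the same pattern as a conventional override file follows; the override path and the dockerd flags here are illustrative assumptions, not copied from this test run.

	# Sketch: clear the inherited ExecStart=, then set a new one in a drop-in.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	  | sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf
	sudo systemctl daemon-reload && sudo systemctl restart docker

After applying such a drop-in, "systemctl cat docker.service" prints the base unit followed by the override, which is the same command the provisioner runs later in this log while detecting the cgroup driver.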
	I0223 18:07:49.302035   46120 client.go:171] LocalClient.Create took 10.795705767s
	I0223 18:07:49.302052   46120 start.go:167] duration metric: libmachine.API.Create for "newest-cni-277000" took 10.795761437s
	I0223 18:07:49.302061   46120 start.go:300] post-start starting for "newest-cni-277000" (driver="docker")
	I0223 18:07:49.302066   46120 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 18:07:49.302165   46120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 18:07:49.302222   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:49.365889   46120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/newest-cni-277000/id_rsa Username:docker}
	I0223 18:07:49.462834   46120 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 18:07:49.466477   46120 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 18:07:49.466493   46120 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 18:07:49.466501   46120 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 18:07:49.466506   46120 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 18:07:49.466516   46120 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
	I0223 18:07:49.466614   46120 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
	I0223 18:07:49.466794   46120 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
	I0223 18:07:49.467005   46120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 18:07:49.474393   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
	I0223 18:07:49.491532   46120 start.go:303] post-start completed in 189.456232ms
	I0223 18:07:49.492053   46120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277000
	I0223 18:07:49.549625   46120 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/config.json ...
	I0223 18:07:49.550080   46120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 18:07:49.550133   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:49.608376   46120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/newest-cni-277000/id_rsa Username:docker}
	I0223 18:07:49.699464   46120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 18:07:49.704588   46120 start.go:128] duration metric: createHost completed in 11.220515435s
	I0223 18:07:49.704625   46120 start.go:83] releasing machines lock for "newest-cni-277000", held for 11.22064193s
	I0223 18:07:49.704728   46120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277000
	I0223 18:07:49.765383   46120 ssh_runner.go:195] Run: cat /version.json
	I0223 18:07:49.765404   46120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 18:07:49.765449   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:49.765480   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:49.829941   46120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/newest-cni-277000/id_rsa Username:docker}
	I0223 18:07:49.830804   46120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/newest-cni-277000/id_rsa Username:docker}
	I0223 18:07:49.975105   46120 ssh_runner.go:195] Run: systemctl --version
	I0223 18:07:49.979965   46120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 18:07:49.984979   46120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 18:07:50.005194   46120 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 18:07:50.005272   46120 ssh_runner.go:195] Run: which cri-dockerd
	I0223 18:07:50.009576   46120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 18:07:50.017065   46120 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 18:07:50.030229   46120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 18:07:50.045774   46120 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 18:07:50.045790   46120 start.go:485] detecting cgroup driver to use...
	I0223 18:07:50.045802   46120 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 18:07:50.045909   46120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 18:07:50.060748   46120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 18:07:50.069428   46120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 18:07:50.077887   46120 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 18:07:50.077950   46120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 18:07:50.086427   46120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 18:07:50.095120   46120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 18:07:50.103733   46120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 18:07:50.112470   46120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 18:07:50.120555   46120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 18:07:50.129015   46120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 18:07:50.136334   46120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 18:07:50.143558   46120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 18:07:50.208219   46120 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 18:07:50.281368   46120 start.go:485] detecting cgroup driver to use...
	I0223 18:07:50.281386   46120 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 18:07:50.281441   46120 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 18:07:50.292039   46120 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 18:07:50.292107   46120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 18:07:50.303082   46120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 18:07:50.318008   46120 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 18:07:50.411331   46120 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 18:07:50.501423   46120 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 18:07:50.501443   46120 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 18:07:50.515463   46120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 18:07:50.614793   46120 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 18:07:50.846869   46120 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 18:07:50.915559   46120 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 18:07:50.984395   46120 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 18:07:51.052064   46120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 18:07:51.120819   46120 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 18:07:51.132258   46120 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 18:07:51.132343   46120 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 18:07:51.136285   46120 start.go:553] Will wait 60s for crictl version
	I0223 18:07:51.136341   46120 ssh_runner.go:195] Run: which crictl
	I0223 18:07:51.139826   46120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 18:07:51.229623   46120 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
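The crictl.yaml written a few lines above points crictl at the cri-dockerd socket, which is why the plain "sudo /usr/bin/crictl version" call just above succeeds without any endpoint flags. The same endpoint can also be passed per invocation; a small sketch follows (the command is illustrative only and not part of the test run).

	# Sketch: query the CRI endpoint explicitly instead of relying on /etc/crictl.yaml.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock \
	            --image-endpoint unix:///var/run/cri-dockerd.sock ps -a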
	I0223 18:07:51.229715   46120 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 18:07:51.256172   46120 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 18:07:51.325616   46120 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 18:07:51.325815   46120 cli_runner.go:164] Run: docker exec -t newest-cni-277000 dig +short host.docker.internal
	I0223 18:07:51.439661   46120 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 18:07:51.439778   46120 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 18:07:51.444465   46120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 18:07:51.454568   46120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-277000
	I0223 18:07:51.534925   46120 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0223 18:07:51.556913   46120 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 18:07:51.557077   46120 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 18:07:51.579024   46120 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 18:07:51.579040   46120 docker.go:560] Images already preloaded, skipping extraction
	I0223 18:07:51.579121   46120 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 18:07:51.600026   46120 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 18:07:51.600042   46120 cache_images.go:84] Images are preloaded, skipping loading
	I0223 18:07:51.600165   46120 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 18:07:51.625791   46120 cni.go:84] Creating CNI manager for ""
	I0223 18:07:51.625809   46120 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 18:07:51.625827   46120 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0223 18:07:51.625845   46120 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-277000 NodeName:newest-cni-277000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 18:07:51.625989   46120 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-277000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 18:07:51.626062   46120 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-277000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 18:07:51.626130   46120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 18:07:51.634253   46120 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 18:07:51.634326   46120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 18:07:51.641779   46120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0223 18:07:51.654512   46120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 18:07:51.667652   46120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0223 18:07:51.681003   46120 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0223 18:07:51.684921   46120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 18:07:51.694845   46120 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000 for IP: 192.168.67.2
	I0223 18:07:51.694862   46120 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:51.695042   46120 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
	I0223 18:07:51.695102   46120 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
	I0223 18:07:51.695146   46120 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/client.key
	I0223 18:07:51.695162   46120 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/client.crt with IP's: []
	I0223 18:07:51.812791   46120 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/client.crt ...
	I0223 18:07:51.812800   46120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/client.crt: {Name:mkb6ef5b2ff0c9c38623745776e4e7ba5220c693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:51.813132   46120 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/client.key ...
	I0223 18:07:51.813140   46120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/client.key: {Name:mk69bf8bf454443038612e112203322cfd67c140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:51.813348   46120 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.key.c7fa3a9e
	I0223 18:07:51.813363   46120 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 18:07:51.907008   46120 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.crt.c7fa3a9e ...
	I0223 18:07:51.907020   46120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.crt.c7fa3a9e: {Name:mk75eafd8b0c8f96a5eadebd3849554c6183bf2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:51.907353   46120 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.key.c7fa3a9e ...
	I0223 18:07:51.907361   46120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.key.c7fa3a9e: {Name:mkb8e985b2d99fc6a3f1c6a3109559974f2e3311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:51.907537   46120 certs.go:333] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.crt
	I0223 18:07:51.907700   46120 certs.go:337] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.key
	I0223 18:07:51.907853   46120 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.key
	I0223 18:07:51.907870   46120 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.crt with IP's: []
	I0223 18:07:52.059082   46120 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.crt ...
	I0223 18:07:52.059100   46120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.crt: {Name:mk165d964c26b657cd8ec97add204a20bd81b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:52.059433   46120 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.key ...
	I0223 18:07:52.059441   46120 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.key: {Name:mk55c9f2ac5978caf2a20928b289fa2213a13cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 18:07:52.059841   46120 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
	W0223 18:07:52.059891   46120 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
	I0223 18:07:52.059901   46120 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 18:07:52.059937   46120 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
	I0223 18:07:52.059969   46120 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
	I0223 18:07:52.060004   46120 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
	I0223 18:07:52.060077   46120 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
	I0223 18:07:52.061442   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 18:07:52.080416   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 18:07:52.097690   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 18:07:52.115205   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/newest-cni-277000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 18:07:52.132733   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 18:07:52.150034   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 18:07:52.167440   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 18:07:52.184743   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 18:07:52.202218   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
	I0223 18:07:52.220025   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
	I0223 18:07:52.237476   46120 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 18:07:52.255033   46120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 18:07:52.268043   46120 ssh_runner.go:195] Run: openssl version
	I0223 18:07:52.273727   46120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
	I0223 18:07:52.282096   46120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
	I0223 18:07:52.286186   46120 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
	I0223 18:07:52.286262   46120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
	I0223 18:07:52.291766   46120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 18:07:52.300085   46120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 18:07:52.308302   46120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 18:07:52.312315   46120 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0223 18:07:52.312363   46120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 18:07:52.317782   46120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 18:07:52.326234   46120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
	I0223 18:07:52.334370   46120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
	I0223 18:07:52.338503   46120 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
	I0223 18:07:52.338567   46120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
	I0223 18:07:52.344223   46120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
	I0223 18:07:52.352406   46120 kubeadm.go:401] StartCluster: {Name:newest-cni-277000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-277000 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 18:07:52.352511   46120 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 18:07:52.371658   46120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 18:07:52.379733   46120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 18:07:52.387454   46120 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 18:07:52.387506   46120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 18:07:52.395348   46120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 18:07:52.395379   46120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 18:07:52.447511   46120 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 18:07:52.447559   46120 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 18:07:52.556546   46120 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 18:07:52.556617   46120 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 18:07:52.556685   46120 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 18:07:52.697905   46120 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 18:07:52.719464   46120 out.go:204]   - Generating certificates and keys ...
	I0223 18:07:52.719568   46120 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 18:07:52.719656   46120 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 18:07:53.143357   46120 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 18:07:53.341461   46120 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 18:07:53.432026   46120 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 18:07:53.606631   46120 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 18:07:53.685791   46120 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 18:07:53.686005   46120 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-277000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0223 18:07:53.940959   46120 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 18:07:53.941075   46120 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-277000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0223 18:07:54.196956   46120 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 18:07:54.281973   46120 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 18:07:54.392432   46120 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 18:07:54.392499   46120 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 18:07:54.537372   46120 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 18:07:54.699526   46120 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 18:07:54.898971   46120 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 18:07:55.037143   46120 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 18:07:55.048003   46120 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 18:07:55.048677   46120 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 18:07:55.048739   46120 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 18:07:55.122579   46120 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 18:07:55.164858   46120 out.go:204]   - Booting up control plane ...
	I0223 18:07:55.164958   46120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 18:07:55.165042   46120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 18:07:55.165103   46120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 18:07:55.165187   46120 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 18:07:55.165322   46120 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 01:41:01 UTC, end at Fri 2023-02-24 02:08:02 UTC. --
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.533706459Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534051984Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534106084Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534921800Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534961430Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534980167Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.534989586Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535101973Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535158123Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535182760Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535229638Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535302939Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535357417Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535589749Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.535654552Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.536109842Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.624542375Z" level=info msg="Loading containers: start."
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.705830368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.738778244Z" level=info msg="Loading containers: done."
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.747281313Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.747342089Z" level=info msg="Daemon has completed initialization"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.769248376Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 01:41:04 old-k8s-version-977000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.772753001Z" level=info msg="API listen on [::]:2376"
	Feb 24 01:41:04 old-k8s-version-977000 dockerd[637]: time="2023-02-24T01:41:04.779603495Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-02-24T02:08:05Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  02:08:05 up  3:07,  0 users,  load average: 0.45, 0.62, 0.86
	Linux old-k8s-version-977000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 01:41:01 UTC, end at Fri 2023-02-24 02:08:05 UTC. --
	Feb 24 02:08:03 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 02:08:04 old-k8s-version-977000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Feb 24 02:08:04 old-k8s-version-977000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 02:08:04 old-k8s-version-977000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 02:08:04 old-k8s-version-977000 kubelet[33983]: I0224 02:08:04.749709   33983 server.go:410] Version: v1.16.0
	Feb 24 02:08:04 old-k8s-version-977000 kubelet[33983]: I0224 02:08:04.749868   33983 plugins.go:100] No cloud provider specified.
	Feb 24 02:08:04 old-k8s-version-977000 kubelet[33983]: I0224 02:08:04.749880   33983 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 02:08:04 old-k8s-version-977000 kubelet[33983]: I0224 02:08:04.751650   33983 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 02:08:04 old-k8s-version-977000 kubelet[33983]: W0224 02:08:04.752400   33983 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 02:08:04 old-k8s-version-977000 kubelet[33983]: W0224 02:08:04.752468   33983 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 02:08:04 old-k8s-version-977000 kubelet[33983]: F0224 02:08:04.752492   33983 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 02:08:04 old-k8s-version-977000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 02:08:04 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 02:08:05 old-k8s-version-977000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Feb 24 02:08:05 old-k8s-version-977000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 02:08:05 old-k8s-version-977000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 02:08:05 old-k8s-version-977000 kubelet[34012]: I0224 02:08:05.482278   34012 server.go:410] Version: v1.16.0
	Feb 24 02:08:05 old-k8s-version-977000 kubelet[34012]: I0224 02:08:05.482493   34012 plugins.go:100] No cloud provider specified.
	Feb 24 02:08:05 old-k8s-version-977000 kubelet[34012]: I0224 02:08:05.482504   34012 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 02:08:05 old-k8s-version-977000 kubelet[34012]: I0224 02:08:05.484166   34012 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 02:08:05 old-k8s-version-977000 kubelet[34012]: W0224 02:08:05.484862   34012 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 02:08:05 old-k8s-version-977000 kubelet[34012]: W0224 02:08:05.484931   34012 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 02:08:05 old-k8s-version-977000 kubelet[34012]: F0224 02:08:05.484958   34012 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 02:08:05 old-k8s-version-977000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 02:08:05 old-k8s-version-977000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 18:08:05.334493   46270 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 2 (492.707992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-977000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.29s)
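The kubelet journal captured above ends in a tight restart loop ("failed to run Kubelet: mountpoint for cpu not found", restart counter past 1660), i.e. the legacy v1.16 kubelet cannot find a cpu cgroup controller mount inside the old-k8s-version node. A quick manual check of the controller mounts is sketched below, as a diagnostic only; the profile name is taken from the log and the commands are not part of the test harness.

	# Sketch: list cgroup controllers and cgroup mounts inside the node container.
	minikube -p old-k8s-version-977000 ssh "grep -w cpu /proc/cgroups; mount | grep cgroup"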

                                                
                                    

Test pass (272/306)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 24.8
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.26.1/json-events 26.55
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.66
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 2.09
19 TestBinaryMirror 1.68
20 TestOffline 60.11
22 TestAddons/Setup 153.92
26 TestAddons/parallel/MetricsServer 5.63
27 TestAddons/parallel/HelmTiller 13
29 TestAddons/parallel/CSI 65.82
30 TestAddons/parallel/Headlamp 12.4
31 TestAddons/parallel/CloudSpanner 5.45
34 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/StoppedEnableDisable 11.42
36 TestCertOptions 38.22
37 TestCertExpiration 248.12
38 TestDockerFlags 35.93
39 TestForceSystemdFlag 39.77
40 TestForceSystemdEnv 40.08
42 TestHyperKitDriverInstallOrUpdate 6.58
45 TestErrorSpam/setup 28.33
46 TestErrorSpam/start 2.36
47 TestErrorSpam/status 1.23
48 TestErrorSpam/pause 1.76
49 TestErrorSpam/unpause 1.91
50 TestErrorSpam/stop 11.44
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 51.38
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 44.57
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 7.96
62 TestFunctional/serial/CacheCmd/cache/add_local 1.64
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.83
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.53
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.8
70 TestFunctional/serial/ExtraConfig 46.12
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 3.04
73 TestFunctional/serial/LogsFileCmd 3.01
75 TestFunctional/parallel/ConfigCmd 0.44
76 TestFunctional/parallel/DashboardCmd 10.58
77 TestFunctional/parallel/DryRun 1.85
78 TestFunctional/parallel/InternationalLanguage 0.92
79 TestFunctional/parallel/StatusCmd 1.29
84 TestFunctional/parallel/AddonsCmd 0.24
85 TestFunctional/parallel/PersistentVolumeClaim 25.32
87 TestFunctional/parallel/SSHCmd 0.83
88 TestFunctional/parallel/CpCmd 2.09
89 TestFunctional/parallel/MySQL 30.87
90 TestFunctional/parallel/FileSync 0.52
91 TestFunctional/parallel/CertSync 2.7
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
99 TestFunctional/parallel/License 0.76
100 TestFunctional/parallel/Version/short 0.09
101 TestFunctional/parallel/Version/components 1
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
106 TestFunctional/parallel/ImageCommands/ImageBuild 5.24
107 TestFunctional/parallel/ImageCommands/Setup 2.83
108 TestFunctional/parallel/DockerEnv/bash 2
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.4
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.84
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.42
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.04
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.04
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.86
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.59
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.53
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.15
123 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.62
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
131 TestFunctional/parallel/ProfileCmd/profile_list 0.48
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
133 TestFunctional/parallel/MountCmd/any-port 9.56
134 TestFunctional/parallel/MountCmd/specific-port 2.35
135 TestFunctional/delete_addon-resizer_images 0.15
136 TestFunctional/delete_my-image_image 0.06
137 TestFunctional/delete_minikube_cached_images 0.06
141 TestImageBuild/serial/NormalBuild 2.28
142 TestImageBuild/serial/BuildWithBuildArg 0.94
143 TestImageBuild/serial/BuildWithDockerIgnore 0.47
144 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
154 TestJSONOutput/start/Command 44.41
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.6
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.58
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 5.82
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.74
179 TestKicCustomNetwork/create_custom_network 31.95
180 TestKicCustomNetwork/use_default_bridge_network 30.79
181 TestKicExistingNetwork 30.95
182 TestKicCustomSubnet 30.31
183 TestKicStaticIP 35.91
184 TestMainNoArgs 0.07
185 TestMinikubeProfile 70.23
188 TestMountStart/serial/StartWithMountFirst 8.12
189 TestMountStart/serial/VerifyMountFirst 0.4
190 TestMountStart/serial/StartWithMountSecond 8.41
191 TestMountStart/serial/VerifyMountSecond 0.4
192 TestMountStart/serial/DeleteFirst 2.13
193 TestMountStart/serial/VerifyMountPostDelete 0.4
194 TestMountStart/serial/Stop 1.57
195 TestMountStart/serial/RestartStopped 6.17
196 TestMountStart/serial/VerifyMountPostStop 0.4
199 TestMultiNode/serial/FreshStart2Nodes 78.75
202 TestMultiNode/serial/AddNode 25.02
203 TestMultiNode/serial/ProfileList 0.42
204 TestMultiNode/serial/CopyFile 14.54
205 TestMultiNode/serial/StopNode 3.01
206 TestMultiNode/serial/StartAfterStop 12.96
207 TestMultiNode/serial/RestartKeepsNodes 113.25
208 TestMultiNode/serial/DeleteNode 6.13
209 TestMultiNode/serial/StopMultiNode 21.92
210 TestMultiNode/serial/RestartMultiNode 53.44
211 TestMultiNode/serial/ValidateNameConflict 32.94
215 TestPreload 143.53
217 TestScheduledStopUnix 103.19
218 TestSkaffold 64.75
220 TestInsufficientStorage 14.7
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 22.18
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 17.43
238 TestStoppedBinaryUpgrade/Setup 4.21
240 TestStoppedBinaryUpgrade/MinikubeLogs 3.54
242 TestPause/serial/Start 45.03
243 TestPause/serial/SecondStartNoReconfiguration 50.67
244 TestPause/serial/Pause 0.66
245 TestPause/serial/VerifyStatus 0.41
246 TestPause/serial/Unpause 0.64
247 TestPause/serial/PauseAgain 0.72
248 TestPause/serial/DeletePaused 2.62
249 TestPause/serial/VerifyDeletedResources 0.56
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
259 TestNoKubernetes/serial/StartWithK8s 31.04
260 TestNoKubernetes/serial/StartWithStopK8s 9.97
261 TestNoKubernetes/serial/Start 9.07
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
263 TestNoKubernetes/serial/ProfileList 1.38
264 TestNoKubernetes/serial/Stop 1.55
265 TestNoKubernetes/serial/StartNoArgs 8.5
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
267 TestNetworkPlugins/group/auto/Start 45.49
268 TestNetworkPlugins/group/auto/KubeletFlags 0.4
269 TestNetworkPlugins/group/auto/NetCatPod 13.19
270 TestNetworkPlugins/group/auto/DNS 0.13
271 TestNetworkPlugins/group/auto/Localhost 0.11
272 TestNetworkPlugins/group/auto/HairPin 0.11
273 TestNetworkPlugins/group/calico/Start 74.09
274 TestNetworkPlugins/group/calico/ControllerPod 5.02
275 TestNetworkPlugins/group/calico/KubeletFlags 0.4
276 TestNetworkPlugins/group/calico/NetCatPod 12.22
277 TestNetworkPlugins/group/calico/DNS 0.11
278 TestNetworkPlugins/group/calico/Localhost 0.15
279 TestNetworkPlugins/group/calico/HairPin 0.13
280 TestNetworkPlugins/group/custom-flannel/Start 58.32
281 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
282 TestNetworkPlugins/group/custom-flannel/NetCatPod 17.22
283 TestNetworkPlugins/group/custom-flannel/DNS 0.14
284 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
285 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
286 TestNetworkPlugins/group/false/Start 48.05
287 TestNetworkPlugins/group/kindnet/Start 58.97
288 TestNetworkPlugins/group/false/KubeletFlags 0.41
289 TestNetworkPlugins/group/false/NetCatPod 11.2
290 TestNetworkPlugins/group/false/DNS 0.13
291 TestNetworkPlugins/group/false/Localhost 0.11
292 TestNetworkPlugins/group/false/HairPin 0.12
293 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
295 TestNetworkPlugins/group/kindnet/NetCatPod 13.22
296 TestNetworkPlugins/group/kindnet/DNS 0.15
297 TestNetworkPlugins/group/kindnet/Localhost 0.12
298 TestNetworkPlugins/group/kindnet/HairPin 0.14
299 TestNetworkPlugins/group/flannel/Start 60.06
300 TestNetworkPlugins/group/enable-default-cni/Start 46.39
301 TestNetworkPlugins/group/flannel/ControllerPod 5.01
302 TestNetworkPlugins/group/flannel/KubeletFlags 0.55
303 TestNetworkPlugins/group/flannel/NetCatPod 12.19
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.49
305 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
306 TestNetworkPlugins/group/flannel/DNS 0.12
307 TestNetworkPlugins/group/flannel/Localhost 0.11
308 TestNetworkPlugins/group/flannel/HairPin 0.11
309 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
310 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
311 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
312 TestNetworkPlugins/group/bridge/Start 50.17
313 TestNetworkPlugins/group/kubenet/Start 45.21
314 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
315 TestNetworkPlugins/group/kubenet/NetCatPod 11.19
316 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
317 TestNetworkPlugins/group/bridge/NetCatPod 11.29
318 TestNetworkPlugins/group/kubenet/DNS 0.14
319 TestNetworkPlugins/group/kubenet/Localhost 0.11
320 TestNetworkPlugins/group/kubenet/HairPin 0.11
321 TestNetworkPlugins/group/bridge/DNS 0.13
322 TestNetworkPlugins/group/bridge/Localhost 0.14
323 TestNetworkPlugins/group/bridge/HairPin 0.11
327 TestStartStop/group/no-preload/serial/FirstStart 68.16
328 TestStartStop/group/no-preload/serial/DeployApp 9.27
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
330 TestStartStop/group/no-preload/serial/Stop 10.99
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
332 TestStartStop/group/no-preload/serial/SecondStart 553.57
335 TestStartStop/group/old-k8s-version/serial/Stop 1.59
336 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.37
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.43
341 TestStartStop/group/no-preload/serial/Pause 3.13
343 TestStartStop/group/embed-certs/serial/FirstStart 44.74
344 TestStartStop/group/embed-certs/serial/DeployApp 10.28
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.86
346 TestStartStop/group/embed-certs/serial/Stop 11.06
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.37
348 TestStartStop/group/embed-certs/serial/SecondStart 556.96
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
353 TestStartStop/group/embed-certs/serial/Pause 3.17
355 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.95
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.03
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.37
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 560.36
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.43
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.17
367 TestStartStop/group/newest-cni/serial/FirstStart 41.53
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
370 TestStartStop/group/newest-cni/serial/Stop 11.1
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
372 TestStartStop/group/newest-cni/serial/SecondStart 24.71
373 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
376 TestStartStop/group/newest-cni/serial/Pause 3.21
TestDownloadOnly/v1.16.0/json-events (24.8s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (24.79595903s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (24.80s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-435000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-435000: exit status 85 (310.030696ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435000 | jenkins | v1.29.0 | 23 Feb 23 16:40 PST |          |
	|         | -p download-only-435000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 16:40:24
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 16:40:24.797767   24889 out.go:296] Setting OutFile to fd 1 ...
	I0223 16:40:24.797943   24889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:40:24.797949   24889 out.go:309] Setting ErrFile to fd 2...
	I0223 16:40:24.797953   24889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:40:24.798057   24889 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	W0223 16:40:24.798154   24889 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-24428/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-24428/.minikube/config/config.json: no such file or directory
	I0223 16:40:24.799681   24889 out.go:303] Setting JSON to true
	I0223 16:40:24.818039   24889 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5999,"bootTime":1677193225,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 16:40:24.818721   24889 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 16:40:24.841229   24889 out.go:97] [download-only-435000] minikube v1.29.0 on Darwin 13.2
	I0223 16:40:24.841467   24889 notify.go:220] Checking for updates...
	W0223 16:40:24.841477   24889 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball: no such file or directory
	I0223 16:40:24.863020   24889 out.go:169] MINIKUBE_LOCATION=15909
	I0223 16:40:24.885245   24889 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 16:40:24.907043   24889 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 16:40:24.929237   24889 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 16:40:24.950903   24889 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	W0223 16:40:24.993032   24889 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 16:40:24.993493   24889 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 16:40:25.055419   24889 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 16:40:25.055552   24889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:40:25.195377   24889 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:40:25.104800414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:40:25.217170   24889 out.go:97] Using the docker driver based on user configuration
	I0223 16:40:25.217216   24889 start.go:296] selected driver: docker
	I0223 16:40:25.217227   24889 start.go:857] validating driver "docker" against <nil>
	I0223 16:40:25.217371   24889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:40:25.357375   24889 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:40:25.26643101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:40:25.357513   24889 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 16:40:25.359901   24889 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0223 16:40:25.360041   24889 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 16:40:25.381866   24889 out.go:169] Using Docker Desktop driver with root privileges
	I0223 16:40:25.403576   24889 cni.go:84] Creating CNI manager for ""
	I0223 16:40:25.403601   24889 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 16:40:25.403625   24889 start_flags.go:319] config:
	{Name:download-only-435000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 16:40:25.424692   24889 out.go:97] Starting control plane node download-only-435000 in cluster download-only-435000
	I0223 16:40:25.424842   24889 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 16:40:25.446525   24889 out.go:97] Pulling base image ...
	I0223 16:40:25.446585   24889 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 16:40:25.446700   24889 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 16:40:25.501743   24889 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 16:40:25.501998   24889 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 16:40:25.502134   24889 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 16:40:25.553932   24889 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 16:40:25.553962   24889 cache.go:57] Caching tarball of preloaded images
	I0223 16:40:25.554287   24889 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 16:40:25.576467   24889 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0223 16:40:25.576506   24889 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:40:25.785080   24889 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 16:40:43.350232   24889 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:40:43.350365   24889 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:40:43.909287   24889 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 16:40:43.909483   24889 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/download-only-435000/config.json ...
	I0223 16:40:43.909508   24889 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/download-only-435000/config.json: {Name:mk9fd68d8c33c2ce435ca9842f1f93bed8bf9c3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 16:40:43.909766   24889 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 16:40:43.910025   24889 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

TestDownloadOnly/v1.26.1/json-events (26.55s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-435000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (26.550548355s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (26.55s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-435000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-435000: exit status 85 (284.115105ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-435000 | jenkins | v1.29.0 | 23 Feb 23 16:40 PST |          |
	|         | -p download-only-435000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-435000 | jenkins | v1.29.0 | 23 Feb 23 16:40 PST |          |
	|         | -p download-only-435000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 16:40:49
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 16:40:49.907567   24940 out.go:296] Setting OutFile to fd 1 ...
	I0223 16:40:49.907724   24940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:40:49.907729   24940 out.go:309] Setting ErrFile to fd 2...
	I0223 16:40:49.907733   24940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:40:49.907850   24940 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	W0223 16:40:49.907946   24940 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-24428/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-24428/.minikube/config/config.json: no such file or directory
	I0223 16:40:49.909108   24940 out.go:303] Setting JSON to true
	I0223 16:40:49.928024   24940 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6024,"bootTime":1677193225,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 16:40:49.928108   24940 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 16:40:49.949376   24940 out.go:97] [download-only-435000] minikube v1.29.0 on Darwin 13.2
	I0223 16:40:49.949608   24940 notify.go:220] Checking for updates...
	I0223 16:40:49.971264   24940 out.go:169] MINIKUBE_LOCATION=15909
	I0223 16:40:49.992292   24940 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 16:40:50.013402   24940 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 16:40:50.035261   24940 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 16:40:50.056453   24940 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	W0223 16:40:50.100024   24940 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 16:40:50.100657   24940 config.go:182] Loaded profile config "download-only-435000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0223 16:40:50.100745   24940 start.go:765] api.Load failed for download-only-435000: filestore "download-only-435000": Docker machine "download-only-435000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 16:40:50.100819   24940 driver.go:365] Setting default libvirt URI to qemu:///system
	W0223 16:40:50.100867   24940 start.go:765] api.Load failed for download-only-435000: filestore "download-only-435000": Docker machine "download-only-435000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 16:40:50.160428   24940 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 16:40:50.160538   24940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:40:50.300397   24940 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:40:50.208419189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:40:50.322976   24940 out.go:97] Using the docker driver based on existing profile
	I0223 16:40:50.323014   24940 start.go:296] selected driver: docker
	I0223 16:40:50.323025   24940 start.go:857] validating driver "docker" against &{Name:download-only-435000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-435000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP:}
	I0223 16:40:50.323325   24940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:40:50.466785   24940 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:40:50.373216936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:40:50.469257   24940 cni.go:84] Creating CNI manager for ""
	I0223 16:40:50.469283   24940 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 16:40:50.469297   24940 start_flags.go:319] config:
	{Name:download-only-435000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-435000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 16:40:50.491344   24940 out.go:97] Starting control plane node download-only-435000 in cluster download-only-435000
	I0223 16:40:50.491477   24940 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 16:40:50.512925   24940 out.go:97] Pulling base image ...
	I0223 16:40:50.513044   24940 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 16:40:50.513144   24940 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 16:40:50.567337   24940 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 16:40:50.567557   24940 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 16:40:50.567582   24940 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory, skipping pull
	I0223 16:40:50.567589   24940 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in cache, skipping pull
	I0223 16:40:50.567596   24940 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc as a tarball
	I0223 16:40:50.617330   24940 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 16:40:50.617358   24940 cache.go:57] Caching tarball of preloaded images
	I0223 16:40:50.617668   24940 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 16:40:50.638828   24940 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0223 16:40:50.638892   24940 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:40:50.842558   24940 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 16:41:09.887175   24940 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:41:09.887332   24940 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 16:41:10.494284   24940 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 16:41:10.494360   24940 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/download-only-435000/config.json ...
	I0223 16:41:10.494747   24940 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 16:41:10.495013   24940 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.1/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/darwin/amd64/v1.26.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-435000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

TestDownloadOnly/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.66s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-435000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (2.09s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-847000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-847000
--- PASS: TestDownloadOnlyKic (2.09s)

TestBinaryMirror (1.68s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-809000 --alsologtostderr --binary-mirror http://127.0.0.1:56414 --driver=docker 
aaa_download_only_test.go:308: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-809000 --alsologtostderr --binary-mirror http://127.0.0.1:56414 --driver=docker : (1.064814832s)
helpers_test.go:175: Cleaning up "binary-mirror-809000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-809000
--- PASS: TestBinaryMirror (1.68s)

TestOffline (60.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-276000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-276000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (57.026571613s)
helpers_test.go:175: Cleaning up "offline-docker-276000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-276000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-276000: (3.081098102s)
--- PASS: TestOffline (60.11s)

TestAddons/Setup (153.92s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-106000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-106000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.923981712s)
--- PASS: TestAddons/Setup (153.92s)

TestAddons/parallel/MetricsServer (5.63s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.271418ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-z7pm9" [2bde60f8-086b-4e16-9e34-e80f484396c2] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008910662s
addons_test.go:380: (dbg) Run:  kubectl --context addons-106000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-106000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

TestAddons/parallel/HelmTiller (13s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.72937ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-zqgdm" [0e15a284-a074-4354-94d5-8796a98821ce] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008831769s
addons_test.go:438: (dbg) Run:  kubectl --context addons-106000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-106000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.465962852s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-106000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.00s)

                                                
                                    
x
+
TestAddons/parallel/CSI (65.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.645305ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-106000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-106000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f65c0b4e-68ee-42c7-8903-21bb5f68fdd7] Pending
helpers_test.go:344: "task-pv-pod" [f65c0b4e-68ee-42c7-8903-21bb5f68fdd7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f65c0b4e-68ee-42c7-8903-21bb5f68fdd7] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.009022009s
addons_test.go:549: (dbg) Run:  kubectl --context addons-106000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-106000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-106000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-106000 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-106000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-106000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-106000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-106000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0d8629b0-3643-41da-a4e5-f3da30457e09] Pending
helpers_test.go:344: "task-pv-pod-restore" [0d8629b0-3643-41da-a4e5-f3da30457e09] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0d8629b0-3643-41da-a4e5-f3da30457e09] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010407849s
addons_test.go:591: (dbg) Run:  kubectl --context addons-106000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-106000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-106000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-106000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-106000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.616341216s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-106000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.82s)
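The wait loops above are just repeated kubectl jsonpath queries. Below is a minimal Go sketch of that polling, assuming kubectl is on PATH and the addons-106000 context from this run; the 2-second interval is an assumption, only the 6m0s budget comes from the log.

```go
// Sketch only (not the minikube helper): poll a PVC's phase with the same
// kubectl invocation the log shows, until it reports "Bound" or the budget runs out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase runs the jsonpath query from helpers_test.go:394 and returns the phase string.
func pvcPhase(context, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}", "-n", "default").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // "waiting 6m0s" in the log
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-106000", "hpvc")
		fmt.Printf("pvc hpvc phase: %q err: %v\n", phase, err)
		if phase == "Bound" {
			return
		}
		time.Sleep(2 * time.Second) // polling interval is an assumption
	}
	fmt.Println("timed out waiting for pvc hpvc to become Bound")
}
```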

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.40s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-106000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-106000 --alsologtostderr -v=1: (2.391664062s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-v28rd" [aa491f10-e35e-4278-b43a-696f76cd2abc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-v28rd" [aa491f10-e35e-4278-b43a-696f76cd2abc] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.009210481s
--- PASS: TestAddons/parallel/Headlamp (12.40s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-2p279" [157b6a75-c179-435a-b46a-7bf08c379e8c] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007382052s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-106000
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-106000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-106000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.42s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-106000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-106000: (10.970303468s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-106000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-106000
--- PASS: TestAddons/StoppedEnableDisable (11.42s)

                                                
                                    
x
+
TestCertOptions (38.22s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-171000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-171000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (34.764486s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-171000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-171000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-171000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-171000: (2.594661922s)
--- PASS: TestCertOptions (38.22s)
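A minimal sketch of re-running the two certificate checks above by hand, assuming the out/minikube-darwin-amd64 binary and the cert-options-171000 profile still exist; grepping the openssl text output and admin.conf for the requested SANs and port is a simplification.

```go
// Sketch only: check that the extra --apiserver-ips/--apiserver-names/--apiserver-port
// values from the start command ended up in the apiserver certificate and kubeconfig.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert, err := exec.Command("out/minikube-darwin-amd64", "-p", "cert-options-171000", "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("cert check failed:", err)
		return
	}
	for _, san := range []string{"192.168.15.15", "www.google.com", "localhost"} {
		fmt.Printf("SAN %-15s in apiserver.crt: %v\n", san, strings.Contains(string(cert), san))
	}

	conf, err := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", "cert-options-171000",
		"--", "sudo cat /etc/kubernetes/admin.conf").CombinedOutput()
	fmt.Println("admin.conf uses port 8555:", err == nil && strings.Contains(string(conf), "8555"))
}
```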

                                                
                                    
x
+
TestCertExpiration (248.12s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=3m --driver=docker : (33.389407732s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0223 17:21:03.317395   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=8760h --driver=docker : (32.068514492s)
helpers_test.go:175: Cleaning up "cert-expiration-802000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-802000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-802000: (2.662918559s)
--- PASS: TestCertExpiration (248.12s)

                                                
                                    
x
+
TestDockerFlags (35.93s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-263000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-263000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (32.302051686s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-263000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-263000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-263000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-263000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-263000: (2.647022257s)
--- PASS: TestDockerFlags (35.93s)
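A sketch of the follow-up checks in this entry, assuming the docker-flags-263000 profile is still running; treating --docker-opt=debug and --docker-opt=icc=true as surfacing as --debug and --icc=true in the unit's ExecStart is an assumption based on the properties queried above.

```go
// Sketch only: confirm --docker-env and --docker-opt values reached the Docker unit,
// using the same systemctl queries the test runs over minikube ssh.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// show queries one systemd property of the docker unit inside the node.
func show(profile, property string) string {
	out, _ := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh",
		"sudo systemctl show docker --property="+property+" --no-pager").CombinedOutput()
	return string(out)
}

func main() {
	env := show("docker-flags-263000", "Environment")
	execStart := show("docker-flags-263000", "ExecStart")
	fmt.Println("FOO=BAR set:   ", strings.Contains(env, "FOO=BAR"))
	fmt.Println("BAZ=BAT set:   ", strings.Contains(env, "BAZ=BAT"))
	fmt.Println("--debug passed:", strings.Contains(execStart, "--debug"))
	fmt.Println("--icc=true:    ", strings.Contains(execStart, "--icc=true"))
}
```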

                                                
                                    
x
+
TestForceSystemdFlag (39.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-400000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-400000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (36.174452891s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-400000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-400000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-400000: (3.068602308s)
--- PASS: TestForceSystemdFlag (39.77s)
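A sketch of the cgroup-driver check above, assuming the force-systemd-flag-400000 profile is up; expecting the literal answer "systemd" is an assumption about what --force-systemd should produce.

```go
// Sketch only: ask the in-node Docker daemon which cgroup driver it is using,
// via the same docker info format string the test runs over minikube ssh.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-flag-400000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	driver := strings.TrimSpace(string(out))
	fmt.Printf("cgroup driver: %q (err: %v, forced systemd: %v)\n", driver, err, driver == "systemd")
}
```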

                                                
                                    
x
+
TestForceSystemdEnv (40.08s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-641000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-641000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (36.900628631s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-641000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-641000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-641000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-641000: (2.75818642s)
--- PASS: TestForceSystemdEnv (40.08s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (6.58s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.58s)

                                                
                                    
x
+
TestErrorSpam/setup (28.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-867000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-867000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 --driver=docker : (28.326677554s)
--- PASS: TestErrorSpam/setup (28.33s)

                                                
                                    
x
+
TestErrorSpam/start (2.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 start --dry-run
--- PASS: TestErrorSpam/start (2.36s)

                                                
                                    
x
+
TestErrorSpam/status (1.23s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 status
--- PASS: TestErrorSpam/status (1.23s)

                                                
                                    
x
+
TestErrorSpam/pause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 pause
--- PASS: TestErrorSpam/pause (1.76s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.91s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

                                                
                                    
x
+
TestErrorSpam/stop (11.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 stop: (10.833277038s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-867000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-867000 stop
--- PASS: TestErrorSpam/stop (11.44s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/test/nested/copy/24885/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (51.38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-523000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2199: (dbg) Done: out/minikube-darwin-amd64 start -p functional-523000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (51.377010479s)
--- PASS: TestFunctional/serial/StartWithProxy (51.38s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0.00s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (44.57s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-523000 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-darwin-amd64 start -p functional-523000 --alsologtostderr -v=8: (44.573129245s)
functional_test.go:657: soft start took 44.573722638s for "functional-523000" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.57s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-523000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (7.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 cache add k8s.gcr.io/pause:3.1: (2.737460877s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 cache add k8s.gcr.io/pause:3.3: (2.687776762s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 cache add k8s.gcr.io/pause:latest: (2.534479038s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.96s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-523000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1290117760/001
functional_test.go:1083: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cache add minikube-local-cache-test:functional-523000
functional_test.go:1083: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 cache add minikube-local-cache-test:functional-523000: (1.11383766s)
functional_test.go:1088: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cache delete minikube-local-cache-test:functional-523000
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-523000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (389.980891ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cache reload
functional_test.go:1152: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 cache reload: (1.602638858s)
functional_test.go:1157: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.83s)
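The same remove / verify-missing / reload / verify-present sequence can be scripted directly against the commands shown above. A minimal sketch, assuming the functional-523000 profile is running and the out/minikube-darwin-amd64 binary is present; the ssh payloads are passed as single strings, which minikube ssh also accepts.

```go
// Sketch only: exercise `cache reload` the way this test does, by deleting the
// cached pause image inside the node and checking crictl before and after the reload.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	const profile = "functional-523000"
	mk := "out/minikube-darwin-amd64"
	_ = run(mk, "-p", profile, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest")
	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest") == nil {
		fmt.Println("expected the image to be gone before the reload")
	}
	_ = run(mk, "-p", profile, "cache", "reload")
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}
```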

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 kubectl -- --context functional-523000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.80s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-523000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.80s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (46.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-523000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0223 16:48:55.795584   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:55.802022   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:55.814334   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:55.834574   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:55.874959   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:55.957128   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:56.117622   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:56.439839   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:57.082094   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:48:58.362408   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:49:00.924115   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:49:06.045715   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:49:16.286126   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-darwin-amd64 start -p functional-523000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.118202169s)
functional_test.go:755: restart took 46.118328049s for "functional-523000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.12s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-523000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
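A sketch of the same health check, assuming kubectl and the functional-523000 context from this run; the struct below models only the metadata.name, status.phase, and Ready-condition fields that the phase/status lines above report.

```go
// Sketch only: list the control-plane pods as JSON and report each pod's phase
// and Ready condition, mirroring the per-component lines in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-523000", "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}
```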

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (3.04s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 logs
functional_test.go:1230: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 logs: (3.040526131s)
--- PASS: TestFunctional/serial/LogsCmd (3.04s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (3.01s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4209343873/001/logs.txt
E0223 16:49:36.767569   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
functional_test.go:1244: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4209343873/001/logs.txt: (3.012715025s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.01s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 config get cpus: exit status 14 (46.429288ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 config get cpus: exit status 14 (66.314647ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
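A sketch of the set / get / unset round trip above, assuming the same binary and profile; exit code 14 for a missing key is taken from the "Non-zero exit ... exit status 14" lines in this entry.

```go
// Sketch only: drive `minikube config` through set, get, and unset for the cpus key,
// checking that a get after unset exits with code 14 as the log shows.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// config runs a `minikube -p functional-523000 config ...` subcommand and
// returns its combined output plus the process exit code.
func config(args ...string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-amd64",
		append([]string{"-p", "functional-523000", "config"}, args...)...)
	out, err := cmd.CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	_, _ = config("set", "cpus", "2")
	val, code := config("get", "cpus")
	fmt.Printf("after set: %q (exit %d)\n", val, code)
	_, _ = config("unset", "cpus")
	_, code = config("get", "cpus")
	fmt.Printf("after unset: exit %d (14 means the key is not in the config)\n", code)
}
```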

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-523000 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-523000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27510: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.58s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-523000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:968: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-523000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (889.997983ms)

                                                
                                                
-- stdout --
	* [functional-523000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 16:50:51.821661   27433 out.go:296] Setting OutFile to fd 1 ...
	I0223 16:50:51.821840   27433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:50:51.821845   27433 out.go:309] Setting ErrFile to fd 2...
	I0223 16:50:51.821849   27433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:50:51.821954   27433 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 16:50:51.823277   27433 out.go:303] Setting JSON to false
	I0223 16:50:51.841570   27433 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6626,"bootTime":1677193225,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 16:50:51.841759   27433 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 16:50:51.863633   27433 out.go:177] * [functional-523000] minikube v1.29.0 on Darwin 13.2
	I0223 16:50:51.905478   27433 notify.go:220] Checking for updates...
	I0223 16:50:51.926312   27433 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 16:50:51.947537   27433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 16:50:51.968519   27433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 16:50:52.010670   27433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 16:50:52.053234   27433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 16:50:52.137355   27433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 16:50:52.159105   27433 config.go:182] Loaded profile config "functional-523000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 16:50:52.159817   27433 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 16:50:52.338339   27433 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 16:50:52.338544   27433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:50:52.492521   27433 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 00:50:52.391404274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:50:52.513771   27433 out.go:177] * Using the docker driver based on existing profile
	I0223 16:50:52.556354   27433 start.go:296] selected driver: docker
	I0223 16:50:52.556370   27433 start.go:857] validating driver "docker" against &{Name:functional-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-523000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 16:50:52.556481   27433 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 16:50:52.580386   27433 out.go:177] 
	W0223 16:50:52.601224   27433 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0223 16:50:52.622339   27433 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-523000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.85s)
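A sketch of the failing half of this test, assuming the same binary and profile; exit status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY message are what the log above shows for a 250MB request below the 1800MB minimum.

```go
// Sketch only: request too little memory in --dry-run mode and confirm minikube
// rejects it with exit code 23 and the insufficient-memory reason seen in the log.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-523000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=docker")
	out, err := cmd.CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	fmt.Println("exit code:", code) // the log shows 23 for this request
	fmt.Println("memory error reported:", strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY"))
}
```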

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-523000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-523000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (915.431897ms)

                                                
                                                
-- stdout --
	* [functional-523000] minikube v1.29.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 16:50:52.010856   27438 out.go:296] Setting OutFile to fd 1 ...
	I0223 16:50:52.011550   27438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:50:52.011573   27438 out.go:309] Setting ErrFile to fd 2...
	I0223 16:50:52.011588   27438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 16:50:52.012126   27438 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 16:50:52.032790   27438 out.go:303] Setting JSON to false
	I0223 16:50:52.051472   27438 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6627,"bootTime":1677193225,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0223 16:50:52.051568   27438 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 16:50:52.074448   27438 out.go:177] * [functional-523000] minikube v1.29.0 sur Darwin 13.2
	I0223 16:50:52.137458   27438 notify.go:220] Checking for updates...
	I0223 16:50:52.179197   27438 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 16:50:52.200432   27438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	I0223 16:50:52.242154   27438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 16:50:52.284405   27438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 16:50:52.326602   27438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	I0223 16:50:52.368167   27438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 16:50:52.389730   27438 config.go:182] Loaded profile config "functional-523000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 16:50:52.390111   27438 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 16:50:52.455820   27438 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 16:50:52.455955   27438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 16:50:52.667428   27438 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-24 00:50:52.537177427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 16:50:52.709358   27438 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0223 16:50:52.730367   27438 start.go:296] selected driver: docker
	I0223 16:50:52.730388   27438 start.go:857] validating driver "docker" against &{Name:functional-523000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-523000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 16:50:52.730566   27438 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 16:50:52.755348   27438 out.go:177] 
	W0223 16:50:52.818482   27438 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0223 16:50:52.860215   27438 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.92s)
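
Note: this test repeats the same failing dry-run and only checks that the error is localized; the French "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." lines above are the localized form of the English message shown in the DryRun section. The log does not show how the locale is selected; assuming minikube follows the usual POSIX locale variables, a hypothetical reproduction would look like:

    # Assumption: LC_ALL/LANG select the message catalog; this mechanism is not shown in this log.
    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-amd64 start -p functional-523000 --dry-run --memory 250MB --alsologtostderr --driver=docker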

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 status
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f95090d5-0506-4e4c-b7b8-9fa2b560c417] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007205208s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-523000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-523000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-523000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-523000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0045cf9b-67c2-416e-9ca4-1a3c8b11aee4] Pending
helpers_test.go:344: "sp-pod" [0045cf9b-67c2-416e-9ca4-1a3c8b11aee4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0045cf9b-67c2-416e-9ca4-1a3c8b11aee4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009291989s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-523000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-523000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-523000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cbf14263-2581-441e-aee0-43b521e01f59] Pending
helpers_test.go:344: "sp-pod" [cbf14263-2581-441e-aee0-43b521e01f59] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cbf14263-2581-441e-aee0-43b521e01f59] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.010052667s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-523000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.32s)
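
Note: the PVC test claims storage, runs a pod that mounts it, writes a file through the mount, recreates the pod, and checks that the file survived. A sketch of the same flow using the kubectl commands from this run; `kubectl wait` stands in for the polling the test harness does, and the manifest contents under testdata/ are not shown in this log:

    # Claim storage and run a pod that mounts it.
    kubectl --context functional-523000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-523000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-523000 wait --for=condition=Ready pod -l test=storage-provisioner --timeout=3m

    # Write through the mount, recreate the pod, and confirm the data persisted.
    kubectl --context functional-523000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-523000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-523000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-523000 wait --for=condition=Ready pod -l test=storage-provisioner --timeout=3m
    kubectl --context functional-523000 exec sp-pod -- ls /tmp/mount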

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh -n functional-523000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 cp functional-523000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd1092034570/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh -n functional-523000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)
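
Note: CpCmd copies a file into the node and back out again. A sketch reusing the cp/ssh commands from this run, with a final `diff` added here (not part of the test) to confirm the round trip; the output path is illustrative:

    # Copy a local file into the node, read it back over ssh, then copy it out again.
    out/minikube-darwin-amd64 -p functional-523000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p functional-523000 ssh -n functional-523000 "sudo cat /home/docker/cp-test.txt"
    out/minikube-darwin-amd64 -p functional-523000 cp functional-523000:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt

    # Not part of the test: verify the contents survived the round trip (hypothetical local path).
    diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt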

                                                
                                    
x
+
TestFunctional/parallel/MySQL (30.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-523000 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-jgp9x" [42d05b6e-63a1-4452-9744-7f6b0e8f5406] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-jgp9x" [42d05b6e-63a1-4452-9744-7f6b0e8f5406] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.067210515s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;": exit status 1 (114.460913ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;": exit status 1 (116.959523ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;": exit status 1 (243.810119ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.87s)
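
Note: the first exec attempts above fail while mysqld is still initializing (access denied, then the socket not yet available), and the harness simply retries until `show databases;` succeeds. An equivalent retry loop as a sketch; the pod name is from this particular run and would differ elsewhere:

    # Retry "show databases;" until mysqld inside the pod accepts the connection.
    # Pod name taken from this run; look it up with `kubectl get pods -l app=mysql` on another cluster.
    until kubectl --context functional-523000 exec mysql-888f84dd9-jgp9x -- mysql -ppassword -e "show databases;"; do
      sleep 5
    done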

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/24885/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo cat /etc/test/nested/copy/24885/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/24885.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo cat /etc/ssl/certs/24885.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/24885.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo cat /usr/share/ca-certificates/24885.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/248852.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo cat /etc/ssl/certs/248852.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/248852.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo cat /usr/share/ca-certificates/248852.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.70s)
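
Note: CertSync checks that host certificates appear in the node both under their .pem names and under OpenSSL hash names (51391683.0, 3ec20f2e.0 above). Such a hash name is the certificate's subject hash; the command below is standard OpenSSL and is not run by the test, and the input path is illustrative:

    # Not part of the test: print the subject-hash filename OpenSSL expects for a certificate.
    openssl x509 -hash -noout -in 24885.pem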

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-523000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 ssh "sudo systemctl is-active crio": exit status 1 (616.226595ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
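
Note: only the selected runtime (docker on this profile) should be active; `systemctl is-active crio` prints "inactive" and exits non-zero (status 3 above), which the ssh wrapper surfaces as a non-zero exit. A sketch of the same check:

    # is-active exits non-zero for inactive units, so the || branch fires on a correctly disabled runtime.
    out/minikube-darwin-amd64 -p functional-523000 ssh "sudo systemctl is-active crio" || echo "crio is not active (expected)"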

                                                
                                    
x
+
TestFunctional/parallel/License (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 version -o=json --components
functional_test.go:2235: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 version -o=json --components: (1.003036694s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-523000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-523000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-523000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-523000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-523000 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-523000 | 41cc560bf34f2 | 1.24MB |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-523000 | ee0570d4afe2b | 30B    |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
2023/02/23 16:51:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-523000 image ls --format json:
[{"id":"41cc560bf34f2fcfe7ff2bb127b4de66df434fa7587188b4226e94b227bc98fe","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-523000"],"size":"1240000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","
repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-523000"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests"
:[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"ee0570d4afe2bb94325369fe50ee574f9d4f2fe019363eb730ff382dfcd29b4b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-523000"],"size":"30"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"fce3
26961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-523000 image ls --format yaml:
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-523000
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: ee0570d4afe2bb94325369fe50ee574f9d4f2fe019363eb730ff382dfcd29b4b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-523000
size: "30"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 ssh pgrep buildkitd: exit status 1 (512.848962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image build -t localhost/my-image:functional-523000 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 image build -t localhost/my-image:functional-523000 testdata/build: (4.417491454s)
functional_test.go:317: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-523000 image build -t localhost/my-image:functional-523000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in b1c68584271c
Removing intermediate container b1c68584271c
---> fd6e106411a4
Step 3/3 : ADD content.txt /
---> 41cc560bf34f
Successfully built 41cc560bf34f
Successfully tagged localhost/my-image:functional-523000
functional_test.go:320: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-523000 image build -t localhost/my-image:functional-523000 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.24s)
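
Note: the build output above shows the three steps of the testdata/build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A sketch reconstructing an equivalent context from those steps; the real testdata/build contents, including content.txt, are not shown in this log and may differ:

    # Reconstructed from the build steps above; paths and file contents here are illustrative.
    mkdir -p /tmp/minikube-build && cd /tmp/minikube-build
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo "placeholder" > content.txt   # the real content.txt is not shown in the log

    # Build inside the node's runtime via minikube, as the test does.
    out/minikube-darwin-amd64 -p functional-523000 image build -t localhost/my-image:functional-523000 .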

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.724415163s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-523000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.83s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-523000 docker-env) && out/minikube-darwin-amd64 status -p functional-523000"
functional_test.go:493: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-523000 docker-env) && out/minikube-darwin-amd64 status -p functional-523000": (1.315440581s)
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-523000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.00s)
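
Note: `minikube docker-env` prints export statements that point the local docker client at the daemon inside the node, so a plain `docker images` afterwards lists the cluster's images. The eval pattern from this run, affecting only the current shell:

    # Point this shell's docker client at the daemon inside functional-523000,
    # then confirm by querying both minikube and docker.
    eval $(out/minikube-darwin-amd64 -p functional-523000 docker-env)
    out/minikube-darwin-amd64 status -p functional-523000
    docker images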

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image load --daemon gcr.io/google-containers/addon-resizer:functional-523000
functional_test.go:352: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 image load --daemon gcr.io/google-containers/addon-resizer:functional-523000: (3.510425045s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image load --daemon gcr.io/google-containers/addon-resizer:functional-523000
functional_test.go:362: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 image load --daemon gcr.io/google-containers/addon-resizer:functional-523000: (2.113682657s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.528060212s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-523000
functional_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image load --daemon gcr.io/google-containers/addon-resizer:functional-523000
functional_test.go:242: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 image load --daemon gcr.io/google-containers/addon-resizer:functional-523000: (4.066690611s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image save gcr.io/google-containers/addon-resizer:functional-523000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 image save gcr.io/google-containers/addon-resizer:functional-523000 /Users/jenkins/workspace/addon-resizer-save.tar: (2.037640984s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image rm gcr.io/google-containers/addon-resizer:functional-523000
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.275113923s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-523000
functional_test.go:421: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 image save --daemon gcr.io/google-containers/addon-resizer:functional-523000
functional_test.go:421: (dbg) Done: out/minikube-darwin-amd64 -p functional-523000 image save --daemon gcr.io/google-containers/addon-resizer:functional-523000: (3.403629469s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-523000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.53s)
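
Note: the image subcommands exercised above round-trip an image between the host's docker daemon and the node: `image load --daemon` pushes a host image into the node, `image save` writes it to a tar (or back to the host daemon with `--daemon`), `image rm` removes it from the node, and `image load <file>` re-imports the tar. The same sequence stitched together from the commands in these tests:

    # Host image -> node, node -> tar, remove from node, tar -> node, node -> host daemon.
    out/minikube-darwin-amd64 -p functional-523000 image load --daemon gcr.io/google-containers/addon-resizer:functional-523000
    out/minikube-darwin-amd64 -p functional-523000 image save gcr.io/google-containers/addon-resizer:functional-523000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-523000 image rm gcr.io/google-containers/addon-resizer:functional-523000
    out/minikube-darwin-amd64 -p functional-523000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-523000 image save --daemon gcr.io/google-containers/addon-resizer:functional-523000
    out/minikube-darwin-amd64 -p functional-523000 image ls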

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-523000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-523000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [201d8c32-a4e3-429d-a079-dd3e0d900388] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0223 16:50:17.729690   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [201d8c32-a4e3-429d-a079-dd3e0d900388] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009030836s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 service list -o json
functional_test.go:1552: Took "620.440952ms" to run "out/minikube-darwin-amd64 -p functional-523000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-523000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-523000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 27100: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1312: Took "413.093086ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1326: Took "68.423715ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1363: Took "455.429323ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1376: Took "67.71894ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/MountCmd/any-port (9.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-523000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1146679216/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677199839878304000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1146679216/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677199839878304000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1146679216/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677199839878304000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1146679216/001/test-1677199839878304000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.086543ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 24 00:50 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 24 00:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 24 00:50 test-1677199839878304000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh cat /mount-9p/test-1677199839878304000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-523000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e4153186-4a35-49a8-88b3-da2e47b27466] Pending
helpers_test.go:344: "busybox-mount" [e4153186-4a35-49a8-88b3-da2e47b27466] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e4153186-4a35-49a8-88b3-da2e47b27466] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e4153186-4a35-49a8-88b3-da2e47b27466] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00993512s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-523000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-523000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port1146679216/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.56s)

TestFunctional/parallel/MountCmd/specific-port (2.35s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-523000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3720205932/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (439.762723ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-523000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3720205932/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-523000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-523000 ssh "sudo umount -f /mount-9p": exit status 1 (377.572173ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-523000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-523000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3720205932/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.35s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-523000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-523000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-523000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.28s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-243000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-243000: (2.27978303s)
--- PASS: TestImageBuild/serial/NormalBuild (2.28s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-243000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-243000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-243000
E0223 16:51:39.651894   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

TestJSONOutput/start/Command (44.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-702000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0223 16:58:55.809998   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-702000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (44.407832849s)
--- PASS: TestJSONOutput/start/Command (44.41s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-702000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-702000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-702000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-702000 --output=json --user=testUser: (5.817983851s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.74s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-573000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-573000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (349.269596ms)
-- stdout --
	{"specversion":"1.0","id":"5f2ae388-4233-41a4-8974-680e77e82290","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-573000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fab516d-9361-4494-bb47-ce019522c571","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"7a250f66-d6db-4a73-be22-9262f39ccd0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig"}}
	{"specversion":"1.0","id":"6be7c663-d5e7-46ed-96f6-96eacf82f2cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"d09a077c-0ac5-4811-abdc-4f4068e83290","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fad6d712-6404-48e9-b964-7882fb16a002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube"}}
	{"specversion":"1.0","id":"635c2a8d-d22b-4603-92ad-f11480f4d3a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cdb6d33c-5f82-4482-a7ef-11ae06d94ea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-573000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-573000
--- PASS: TestErrorJSONOutput (0.74s)

TestKicCustomNetwork/create_custom_network (31.95s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-109000 --network=
E0223 16:59:44.881278   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-109000 --network=: (29.350871343s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-109000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-109000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-109000: (2.547414599s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.95s)

TestKicCustomNetwork/use_default_bridge_network (30.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-259000 --network=bridge
E0223 17:00:12.572498   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-259000 --network=bridge: (28.228632686s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-259000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-259000: (2.506499158s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.79s)

TestKicExistingNetwork (30.95s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-185000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-185000 --network=existing-network: (28.18894587s)
helpers_test.go:175: Cleaning up "existing-network-185000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-185000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-185000: (2.401008153s)
--- PASS: TestKicExistingNetwork (30.95s)

TestKicCustomSubnet (30.31s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-280000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-280000 --subnet=192.168.60.0/24: (27.636067477s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-280000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-280000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-280000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-280000: (2.611565505s)
--- PASS: TestKicCustomSubnet (30.31s)

TestKicStaticIP (35.91s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-766000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-766000 --static-ip=192.168.200.200: (33.081084936s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-766000 ip
helpers_test.go:175: Cleaning up "static-ip-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-766000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-766000: (2.598564104s)
--- PASS: TestKicStaticIP (35.91s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (70.23s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-594000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-594000 --driver=docker : (34.596318493s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-596000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-596000 --driver=docker : (28.671892647s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-594000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-596000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-596000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-596000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-596000: (2.594637711s)
helpers_test.go:175: Cleaning up "first-594000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-594000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-594000: (2.610124539s)
--- PASS: TestMinikubeProfile (70.23s)

TestMountStart/serial/StartWithMountFirst (8.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-491000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-491000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.118188994s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.12s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-491000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (8.41s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-502000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-502000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.403510565s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.41s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-502000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (2.13s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-491000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-491000 --alsologtostderr -v=5: (2.130619378s)
--- PASS: TestMountStart/serial/DeleteFirst (2.13s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-502000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.57s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-502000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-502000: (1.568732149s)
--- PASS: TestMountStart/serial/Stop (1.57s)

TestMountStart/serial/RestartStopped (6.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-502000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-502000: (5.172793418s)
E0223 17:03:55.720519   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (6.17s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-502000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (78.75s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-384000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0223 17:04:44.790768   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-384000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m17.915429424s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.75s)

TestMultiNode/serial/AddNode (25.02s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-384000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-384000 -v 3 --alsologtostderr: (24.039134542s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.02s)

TestMultiNode/serial/ProfileList (0.42s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.42s)

TestMultiNode/serial/CopyFile (14.54s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp testdata/cp-test.txt multinode-384000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile4117761037/001/cp-test_multinode-384000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000:/home/docker/cp-test.txt multinode-384000-m02:/home/docker/cp-test_multinode-384000_multinode-384000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m02 "sudo cat /home/docker/cp-test_multinode-384000_multinode-384000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000:/home/docker/cp-test.txt multinode-384000-m03:/home/docker/cp-test_multinode-384000_multinode-384000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m03 "sudo cat /home/docker/cp-test_multinode-384000_multinode-384000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp testdata/cp-test.txt multinode-384000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile4117761037/001/cp-test_multinode-384000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000-m02:/home/docker/cp-test.txt multinode-384000:/home/docker/cp-test_multinode-384000-m02_multinode-384000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000 "sudo cat /home/docker/cp-test_multinode-384000-m02_multinode-384000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000-m02:/home/docker/cp-test.txt multinode-384000-m03:/home/docker/cp-test_multinode-384000-m02_multinode-384000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m03 "sudo cat /home/docker/cp-test_multinode-384000-m02_multinode-384000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp testdata/cp-test.txt multinode-384000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile4117761037/001/cp-test_multinode-384000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000-m03:/home/docker/cp-test.txt multinode-384000:/home/docker/cp-test_multinode-384000-m03_multinode-384000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000 "sudo cat /home/docker/cp-test_multinode-384000-m03_multinode-384000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 cp multinode-384000-m03:/home/docker/cp-test.txt multinode-384000-m02:/home/docker/cp-test_multinode-384000-m03_multinode-384000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 ssh -n multinode-384000-m02 "sudo cat /home/docker/cp-test_multinode-384000-m03_multinode-384000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.54s)

TestMultiNode/serial/StopNode (3.01s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-384000 node stop m03: (1.506519496s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-384000 status: exit status 7 (759.07823ms)
-- stdout --
	multinode-384000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-384000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-384000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr: exit status 7 (744.42002ms)
-- stdout --
	multinode-384000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-384000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-384000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0223 17:06:13.585279   31238 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:06:13.585466   31238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:06:13.585471   31238 out.go:309] Setting ErrFile to fd 2...
	I0223 17:06:13.585475   31238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:06:13.585595   31238 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:06:13.585773   31238 out.go:303] Setting JSON to false
	I0223 17:06:13.585797   31238 mustload.go:65] Loading cluster: multinode-384000
	I0223 17:06:13.585852   31238 notify.go:220] Checking for updates...
	I0223 17:06:13.586084   31238 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:06:13.586098   31238 status.go:255] checking status of multinode-384000 ...
	I0223 17:06:13.586471   31238 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:06:13.643308   31238 status.go:330] multinode-384000 host status = "Running" (err=<nil>)
	I0223 17:06:13.643337   31238 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:06:13.643586   31238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000
	I0223 17:06:13.700936   31238 host.go:66] Checking if "multinode-384000" exists ...
	I0223 17:06:13.701198   31238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:06:13.701263   31238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:06:13.758441   31238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58127 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000/id_rsa Username:docker}
	I0223 17:06:13.849709   31238 ssh_runner.go:195] Run: systemctl --version
	I0223 17:06:13.854566   31238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:06:13.864160   31238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-384000
	I0223 17:06:13.921401   31238 kubeconfig.go:92] found "multinode-384000" server: "https://127.0.0.1:58131"
	I0223 17:06:13.921426   31238 api_server.go:165] Checking apiserver status ...
	I0223 17:06:13.921468   31238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 17:06:13.931720   31238 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1883/cgroup
	W0223 17:06:13.939883   31238 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1883/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0223 17:06:13.939950   31238 ssh_runner.go:195] Run: ls
	I0223 17:06:13.943998   31238 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58131/healthz ...
	I0223 17:06:13.949723   31238 api_server.go:278] https://127.0.0.1:58131/healthz returned 200:
	ok
	I0223 17:06:13.949734   31238 status.go:421] multinode-384000 apiserver status = Running (err=<nil>)
	I0223 17:06:13.949744   31238 status.go:257] multinode-384000 status: &{Name:multinode-384000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 17:06:13.949756   31238 status.go:255] checking status of multinode-384000-m02 ...
	I0223 17:06:13.949989   31238 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:06:14.007087   31238 status.go:330] multinode-384000-m02 host status = "Running" (err=<nil>)
	I0223 17:06:14.007109   31238 host.go:66] Checking if "multinode-384000-m02" exists ...
	I0223 17:06:14.007371   31238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384000-m02
	I0223 17:06:14.064492   31238 host.go:66] Checking if "multinode-384000-m02" exists ...
	I0223 17:06:14.064751   31238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 17:06:14.064809   31238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384000-m02
	I0223 17:06:14.124741   31238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58195 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/multinode-384000-m02/id_rsa Username:docker}
	I0223 17:06:14.215980   31238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 17:06:14.225690   31238 status.go:257] multinode-384000-m02 status: &{Name:multinode-384000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0223 17:06:14.225723   31238 status.go:255] checking status of multinode-384000-m03 ...
	I0223 17:06:14.226021   31238 cli_runner.go:164] Run: docker container inspect multinode-384000-m03 --format={{.State.Status}}
	I0223 17:06:14.283152   31238 status.go:330] multinode-384000-m03 host status = "Stopped" (err=<nil>)
	I0223 17:06:14.283188   31238 status.go:343] host is not running, skipping remaining checks
	I0223 17:06:14.283197   31238 status.go:257] multinode-384000-m03 status: &{Name:multinode-384000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.01s)

TestMultiNode/serial/StartAfterStop (12.96s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-384000 node start m03 --alsologtostderr: (11.861946662s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.96s)

TestMultiNode/serial/RestartKeepsNodes (113.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-384000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-384000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-384000: (23.042975216s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-384000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-384000 --wait=true -v=8 --alsologtostderr: (1m30.11258932s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-384000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.25s)

TestMultiNode/serial/DeleteNode (6.13s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-384000 node delete m03: (5.253683223s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.13s)

TestMultiNode/serial/StopMultiNode (21.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-384000 stop: (21.600427605s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-384000 status: exit status 7 (160.157419ms)

-- stdout --
	multinode-384000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-384000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr: exit status 7 (157.23188ms)

-- stdout --
	multinode-384000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-384000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 17:08:48.434362   31910 out.go:296] Setting OutFile to fd 1 ...
	I0223 17:08:48.434530   31910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:08:48.434535   31910 out.go:309] Setting ErrFile to fd 2...
	I0223 17:08:48.434539   31910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 17:08:48.434650   31910 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
	I0223 17:08:48.434828   31910 out.go:303] Setting JSON to false
	I0223 17:08:48.434852   31910 mustload.go:65] Loading cluster: multinode-384000
	I0223 17:08:48.434903   31910 notify.go:220] Checking for updates...
	I0223 17:08:48.435157   31910 config.go:182] Loaded profile config "multinode-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 17:08:48.435169   31910 status.go:255] checking status of multinode-384000 ...
	I0223 17:08:48.435553   31910 cli_runner.go:164] Run: docker container inspect multinode-384000 --format={{.State.Status}}
	I0223 17:08:48.490090   31910 status.go:330] multinode-384000 host status = "Stopped" (err=<nil>)
	I0223 17:08:48.490107   31910 status.go:343] host is not running, skipping remaining checks
	I0223 17:08:48.490113   31910 status.go:257] multinode-384000 status: &{Name:multinode-384000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 17:08:48.490139   31910 status.go:255] checking status of multinode-384000-m02 ...
	I0223 17:08:48.490392   31910 cli_runner.go:164] Run: docker container inspect multinode-384000-m02 --format={{.State.Status}}
	I0223 17:08:48.545999   31910 status.go:330] multinode-384000-m02 host status = "Stopped" (err=<nil>)
	I0223 17:08:48.546030   31910 status.go:343] host is not running, skipping remaining checks
	I0223 17:08:48.546040   31910 status.go:257] multinode-384000-m02 status: &{Name:multinode-384000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.92s)

TestMultiNode/serial/RestartMultiNode (53.44s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-384000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0223 17:08:55.717148   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-384000 --wait=true -v=8 --alsologtostderr --driver=docker : (52.528957442s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-384000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.44s)

TestMultiNode/serial/ValidateNameConflict (32.94s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-384000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-384000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-384000-m02 --driver=docker : exit status 14 (393.279241ms)

-- stdout --
	* [multinode-384000-m02] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-384000-m02' is duplicated with machine name 'multinode-384000-m02' in profile 'multinode-384000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-384000-m03 --driver=docker 
E0223 17:09:44.787015   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-384000-m03 --driver=docker : (29.492851003s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-384000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-384000: exit status 80 (576.083407ms)

-- stdout --
	* Adding node m03 to cluster multinode-384000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-384000-m03 already exists in multinode-384000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-384000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-384000-m03: (2.432182416s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.94s)

TestPreload (143.53s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0223 17:11:07.837783   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m7.874149051s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-244000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-244000 -- docker pull gcr.io/k8s-minikube/busybox: (2.658636892s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-244000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-244000: (10.849017468s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (59.065570999s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-244000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-244000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-244000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-244000: (2.671180626s)
--- PASS: TestPreload (143.53s)

TestScheduledStopUnix (103.19s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-656000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-656000 --memory=2048 --driver=docker : (29.053436746s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-656000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-656000 -n scheduled-stop-656000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-656000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-656000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-656000 -n scheduled-stop-656000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-656000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-656000 --schedule 15s
E0223 17:13:55.714911   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-656000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-656000: exit status 7 (107.784534ms)

-- stdout --
	scheduled-stop-656000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-656000 -n scheduled-stop-656000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-656000 -n scheduled-stop-656000: exit status 7 (101.625359ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-656000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-656000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-656000: (2.293198293s)
--- PASS: TestScheduledStopUnix (103.19s)

TestSkaffold (64.75s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe696159706 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-121000 --memory=2600 --driver=docker 
E0223 17:14:44.783457   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-121000 --memory=2600 --driver=docker : (27.3779672s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe696159706 run --minikube-profile skaffold-121000 --kube-context skaffold-121000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe696159706 run --minikube-profile skaffold-121000 --kube-context skaffold-121000 --status-check=true --port-forward=false --interactive=false: (17.347879039s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5858b956bc-tzd2x" [d986ccd3-6a30-4ebd-87f7-30f746ac7a86] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012968724s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-856d966947-h7m4m" [52a2daef-a75e-4ad3-9bdc-3c0b6e708ee6] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009535674s
helpers_test.go:175: Cleaning up "skaffold-121000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-121000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-121000: (2.838197358s)
--- PASS: TestSkaffold (64.75s)

TestInsufficientStorage (14.7s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-478000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-478000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.566786494s)

-- stdout --
	{"specversion":"1.0","id":"3bebd1b4-9a2e-443c-8c61-5b848a1ab755","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-478000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6372fa71-60bc-4fd1-926b-05156008a7e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"933e53de-58b5-43bc-b823-94932933cd97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig"}}
	{"specversion":"1.0","id":"28dece01-815e-4a24-9f6c-43eccabd4477","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"d42337e7-aa8f-4d64-a241-b1aa919ad591","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b043c24f-b466-4af2-92cd-e9c0db77ac03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube"}}
	{"specversion":"1.0","id":"6dcad8ad-e7d7-437f-be1e-0c6081abf22b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f7d81c5e-6212-410b-a6cc-770101c3ef53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b9d0f484-7b53-44bb-b7e2-e646148cf37e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"06f7c842-8689-45bd-b4fa-d8c5f524e8b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"81abb85c-17cd-4efc-8958-a7a8b280de93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"7ed30b87-68ab-467c-963a-1c33246c296c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-478000 in cluster insufficient-storage-478000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a39639f3-25d6-43d4-a876-baeba9f60cc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc15905c-f7aa-43b7-94a8-958b7f8418ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0dd42392-2fe7-43c0-a6e7-1e9fcaa0b745","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-478000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-478000 --output=json --layout=cluster: exit status 7 (388.771761ms)

-- stdout --
	{"Name":"insufficient-storage-478000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-478000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 17:15:47.128445   33733 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-478000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-478000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-478000 --output=json --layout=cluster: exit status 7 (384.81035ms)

-- stdout --
	{"Name":"insufficient-storage-478000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-478000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 17:15:47.513778   33743 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-478000" does not appear in /Users/jenkins/minikube-integration/15909-24428/kubeconfig
	E0223 17:15:47.522787   33743 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/insufficient-storage-478000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-478000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-478000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-478000: (2.354267809s)
--- PASS: TestInsufficientStorage (14.70s)
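
The --output=json start above emits one CloudEvents-style JSON object per line; the storage failure surfaces as an "io.k8s.sigs.minikube.error" event named RSRC_DOCKER_STORAGE carrying exit code 26. A minimal Go sketch (not part of the test suite) for filtering such a stream follows; the field names mirror the events printed in the stdout block above.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the shape of the JSON lines printed by "minikube start --output=json".
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Pipe the --output=json stream into stdin, e.g.
	//   minikube start -p insufficient-storage-478000 --output=json ... | ./filter
	dec := json.NewDecoder(os.Stdin)
	for {
		var e event
		if err := dec.Decode(&e); err != nil {
			break // io.EOF (or trailing non-JSON data) ends the loop
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exit code %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
		}
	}
}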

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (22.18s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current28468267/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current28468267/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current28468267/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current28468267/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (22.18s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (17.43s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current919823934/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current919823934/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current919823934/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current919823934/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (17.43s)

TestStoppedBinaryUpgrade/Setup (4.21s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-739000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-739000: (3.539371631s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)

TestPause/serial/Start (45.03s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-252000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0223 17:23:06.198588   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-252000 --memory=2048 --install-addons=false --wait=all --driver=docker : (45.026298549s)
--- PASS: TestPause/serial/Start (45.03s)

TestPause/serial/SecondStartNoReconfiguration (50.67s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-252000 --alsologtostderr -v=1 --driver=docker 
E0223 17:23:55.746256   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-252000 --alsologtostderr -v=1 --driver=docker : (50.651495545s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.67s)

TestPause/serial/Pause (0.66s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-252000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

TestPause/serial/VerifyStatus (0.41s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-252000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-252000 --output=json --layout=cluster: exit status 2 (406.861076ms)

-- stdout --
	{"Name":"pause-252000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-252000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
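
The cluster-layout status above reports state as HTTP-like codes (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage). A minimal Go sketch (a hypothetical helper, not from status_test.go) that decodes a payload with the keys shown above and reports per-component state:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors the keys visible in the "--output=json --layout=cluster" output above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		Components map[string]struct {
			StatusName string `json:"StatusName"`
		} `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed from the pause-252000 output above (418 == Paused).
	raw := `{"Name":"pause-252000","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-252000",
	    "Components":{"apiserver":{"StatusName":"Paused"},
	                  "kubelet":{"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %s (code %d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for comp, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, comp, c.StatusName)
		}
	}
}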

TestPause/serial/Unpause (0.64s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-252000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.72s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-252000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (2.62s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-252000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-252000 --alsologtostderr -v=5: (2.624174319s)
--- PASS: TestPause/serial/DeletePaused (2.62s)

TestPause/serial/VerifyDeletedResources (0.56s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-252000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-252000: exit status 1 (54.31211ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-252000

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-006000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-006000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (429.649789ms)

-- stdout --
	* [NoKubernetes-006000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

TestNoKubernetes/serial/StartWithK8s (31.04s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-006000 --driver=docker 
E0223 17:24:44.814607   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-006000 --driver=docker : (30.343070298s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-006000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.04s)

TestNoKubernetes/serial/StartWithStopK8s (9.97s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-006000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-006000 --no-kubernetes --driver=docker : (6.741867978s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-006000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-006000 status -o json: exit status 2 (514.558762ms)

-- stdout --
	{"Name":"NoKubernetes-006000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-006000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-006000: (2.716246556s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.97s)

TestNoKubernetes/serial/Start (9.07s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-006000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-006000 --no-kubernetes --driver=docker : (9.06497797s)
--- PASS: TestNoKubernetes/serial/Start (9.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-006000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-006000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (409.585923ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

TestNoKubernetes/serial/ProfileList (1.38s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
E0223 17:25:22.352943   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.38s)

TestNoKubernetes/serial/Stop (1.55s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-006000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-006000: (1.55286616s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

TestNoKubernetes/serial/StartNoArgs (8.5s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-006000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-006000 --driver=docker : (8.498082288s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.50s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-006000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-006000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.673525ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestNetworkPlugins/group/auto/Start (45.49s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0223 17:25:50.041484   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (45.490044889s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.49s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (13.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-z6dw9" [2a1e3b55-c066-4c3f-9780-41750da27a28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-z6dw9" [2a1e3b55-c066-4c3f-9780-41750da27a28] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.008089248s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.19s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (74.09s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0223 17:27:47.867515   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m14.094390699s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.09s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xfdfr" [b64425c5-647c-4618-bd84-bc03c2066f2c] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017110957s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (12.22s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-lpnpn" [161a0660-161c-4577-b972-055f2cbe1c76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-lpnpn" [161a0660-161c-4577-b972-055f2cbe1c76] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006709759s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.22s)

TestNetworkPlugins/group/calico/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (58.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
E0223 17:28:55.745411   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (58.316537389s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (17.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-s6mm8" [3f8a2109-5657-4c07-b3b2-dc6b2e2f921e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-s6mm8" [3f8a2109-5657-4c07-b3b2-dc6b2e2f921e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 17.011091051s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (17.22s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (48.05s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (48.052995793s)
--- PASS: TestNetworkPlugins/group/false/Start (48.05s)

TestNetworkPlugins/group/kindnet/Start (58.97s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E0223 17:31:22.114502   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:22.119584   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:22.129637   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:22.149922   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:22.190078   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:22.272232   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:22.432450   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:22.752950   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:23.393762   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (58.965006347s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.97s)

TestNetworkPlugins/group/false/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

TestNetworkPlugins/group/false/NetCatPod (11.20s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2xwtl" [b8a29706-c5bd-4bcd-899b-fa3cff6696a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 17:31:24.674062   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:31:27.234362   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-2xwtl" [b8a29706-c5bd-4bcd-899b-fa3cff6696a5] Running
E0223 17:31:32.354732   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.007982984s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.20s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ncrnk" [330c6248-2f6f-4712-beb1-48205b66b1de] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01459696s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dh467" [1d08f525-6d66-4e2c-8d3f-e894718cd403] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 17:31:42.596978   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-dh467" [1d08f525-6d66-4e2c-8d3f-e894718cd403] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.009105065s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.22s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (60.06s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0223 17:32:03.077807   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (1m0.057755564s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.06s)

TestNetworkPlugins/group/enable-default-cni/Start (46.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
E0223 17:32:44.038644   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (46.38607802s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.39s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gpchl" [2f3eebd0-2c19-42e6-9e46-935288515a9f] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.01354506s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

TestNetworkPlugins/group/flannel/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-gvrlp" [b760c924-4ec8-43f7-a067-93d5bcca34b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-gvrlp" [b760c924-4ec8-43f7-a067-93d5bcca34b3] Running
E0223 17:33:12.521150   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:12.526216   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:12.536318   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:12.556678   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:12.598166   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:12.678281   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:12.838336   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:13.158520   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:13.799047   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.008592807s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.49s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-42dzb" [da9f3eee-ce47-4296-bb49-84f07940f9cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-42dzb" [da9f3eee-ce47-4296-bb49-84f07940f9cd] Running
E0223 17:33:15.079996   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009699006s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (50.17s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (50.170210996s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.17s)

TestNetworkPlugins/group/kubenet/Start (45.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0223 17:33:53.483694   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:33:55.748401   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 17:34:05.960088   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-152000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (45.209728538s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.21s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-p4fnj" [b66c094c-116d-4fd9-9585-a139559a2133] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-p4fnj" [b66c094c-116d-4fd9-9585-a139559a2133] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.009456862s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-152000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-152000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-b494r" [6a4838f0-3b0b-4761-8d50-8692506e0d1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 17:34:34.446119   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-b494r" [6a4838f0-3b0b-4761-8d50-8692506e0d1c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.012861876s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-152000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-152000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E0223 18:08:12.641291   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (68.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0223 17:35:13.855236   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
E0223 17:35:22.354105   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:35:34.336839   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
E0223 17:35:56.366698   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:36:15.299091   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (1m8.164627033s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.16s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-732000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [33c12a96-df69-434f-bae6-1ac06d89aa68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [33c12a96-df69-434f-bae6-1ac06d89aa68] Running
E0223 17:36:22.115754   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:36:24.105002   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:24.110055   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:24.120919   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:24.141631   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:24.181742   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:24.262218   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:24.424138   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:24.744824   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:25.384971   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:26.665320   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015730732s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-732000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-732000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-732000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/no-preload/serial/Stop (10.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-732000 --alsologtostderr -v=3
E0223 17:36:29.225573   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:34.347916   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:36.929843   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:36.936254   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:36.948526   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:36.968973   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:37.011189   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:37.093329   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:37.254279   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:37.574895   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:38.215272   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-732000 --alsologtostderr -v=3: (10.992874102s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000: exit status 7 (102.492455ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-732000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/no-preload/serial/SecondStart (553.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0223 17:36:39.495444   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:42.113130   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:44.588303   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:36:45.404675   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/skaffold-121000/client.crt: no such file or directory
E0223 17:36:47.234442   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:36:49.801502   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:36:57.474840   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:37:05.077471   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:37:17.983313   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:37:37.260250   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/custom-flannel-152000/client.crt: no such file or directory
E0223 17:37:46.073328   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:37:58.961746   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:37:59.130321   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:37:59.135447   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:37:59.145521   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:37:59.166022   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:37:59.206135   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:37:59.286253   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:37:59.447383   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:37:59.769579   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:38:00.410460   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:38:01.690722   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:38:04.251431   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:38:08.347012   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:08.352183   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:08.363213   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:08.385014   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:08.425531   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:08.506366   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:08.666532   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:08.986691   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:09.372882   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:38:09.627179   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:10.907947   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:12.568206   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:38:13.468433   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:18.588835   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:19.613661   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:38:28.829366   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:38.850763   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 17:38:40.094575   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:38:40.254301   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:38:49.310190   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:38:55.794240   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 17:39:07.998729   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-732000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (9m13.152704469s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-732000 -n no-preload-732000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (553.57s)

TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-977000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-977000 --alsologtostderr -v=3: (1.589181287s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-977000 -n old-k8s-version-977000: exit status 7 (103.608638ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-977000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wtvm6" [780af19e-cf8a-403e-a4d2-f48063722c46] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012344841s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wtvm6" [780af19e-cf8a-403e-a4d2-f48063722c46] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009850107s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-732000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-732000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-732000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-732000 -n no-preload-732000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-732000 -n no-preload-732000: exit status 2 (410.531668ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-732000 -n no-preload-732000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-732000 -n no-preload-732000: exit status 2 (417.121384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-732000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-732000 -n no-preload-732000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-732000 -n no-preload-732000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

TestStartStop/group/embed-certs/serial/FirstStart (44.74s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0223 17:46:17.950471   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:17.955585   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:17.965773   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:17.986107   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:18.026251   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:18.106414   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:18.267700   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:18.587972   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:19.230074   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:20.510558   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:22.173161   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:46:23.070871   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:24.162210   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/false-152000/client.crt: no such file or directory
E0223 17:46:28.191313   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:46:36.986076   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/kindnet-152000/client.crt: no such file or directory
E0223 17:46:38.432618   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (44.741616906s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.74s)
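
FirstStart boots a fresh profile with --embed-certs, which embeds the client certificates directly in the kubeconfig instead of referencing them by path. For reference, the recorded invocation can be replayed by hand; the profile name, flags and local binary path below are taken verbatim from this run:

    out/minikube-darwin-amd64 start -p embed-certs-309000 --memory=2200 \
      --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.26.1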

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-309000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [514e0640-c1c6-4b32-9d87-3cd82ef62bdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [514e0640-c1c6-4b32-9d87-3cd82ef62bdd] Running
E0223 17:46:58.913816   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.014668683s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-309000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
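
DeployApp applies testdata/busybox.yaml (a pod labelled integration-test=busybox, per the wait condition above), waits for it to become Ready, then checks the open-file limit inside the container. A rough manual equivalent, using kubectl wait in place of the test's internal poller:

    kubectl --context embed-certs-309000 create -f testdata/busybox.yaml
    kubectl --context embed-certs-309000 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-309000 exec busybox -- /bin/sh -c "ulimit -n"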

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-309000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-309000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)
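
EnableAddonWhileActive turns on the metrics-server addon against the running profile, overriding the image and registry (pointed at an echoserver image and a fake registry so no real metrics-server is pulled), then confirms the Deployment object exists. The two commands, as recorded:

    out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-309000 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-309000 describe deploy/metrics-server -n kube-system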

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-309000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-309000 --alsologtostderr -v=3: (11.05939172s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000: exit status 7 (102.968895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-309000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)
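
EnableAddonAfterStop relies on minikube status exit codes: with the cluster stopped, status --format={{.Host}} prints Stopped and exits 7, which the test treats as acceptable before enabling the dashboard addon. A small sketch of the same check, assuming the profile is still stopped:

    out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
    echo $?    # 7 while the host is stopped, per the run above
    out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-309000 \
      --images=MetricsScraper=k8s.gcr.io/echoserver:1.4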

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (556.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0223 17:47:39.874943   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
E0223 17:47:45.220538   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/auto-152000/client.crt: no such file or directory
E0223 17:47:59.145100   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/flannel-152000/client.crt: no such file or directory
E0223 17:48:08.360532   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/enable-default-cni-152000/client.crt: no such file or directory
E0223 17:48:12.581795   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/calico-152000/client.crt: no such file or directory
E0223 17:48:55.808142   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 17:49:01.797623   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/no-preload-732000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-309000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (9m16.53870425s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-309000 -n embed-certs-309000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (556.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-ftgmr" [50e2444e-a9e7-4a22-b927-796cd03b545e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011677411s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-ftgmr" [50e2444e-a9e7-4a22-b927-796cd03b545e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00787969s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-309000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
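
UserAppExistsAfterStop and AddonExistsAfterStop both key off the dashboard surviving the restart: they wait for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then describe the dashboard-metrics-scraper Deployment. Roughly equivalent manual checks (namespace and labels taken from the log above):

    kubectl --context embed-certs-309000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-309000 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper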

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-309000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)
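
VerifyKubernetesImages lists the images present in the node's container runtime over SSH and flags anything that is not a stock minikube/Kubernetes image (here the busybox test image). The recorded command, plus a jq filter one might add to print just the repo tags (jq is not part of the test, only an illustration):

    out/minikube-darwin-amd64 ssh -p embed-certs-309000 "sudo crictl images -o json"
    out/minikube-darwin-amd64 ssh -p embed-certs-309000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'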

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-309000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-309000 -n embed-certs-309000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-309000 -n embed-certs-309000: exit status 2 (409.712325ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-309000 -n embed-certs-309000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-309000 -n embed-certs-309000: exit status 2 (415.962083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-309000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-309000 -n embed-certs-309000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-309000 -n embed-certs-309000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.17s)
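
The Pause subtest is a pause/verify/unpause round trip: after pause, the status templates report the apiserver as Paused and the kubelet as Stopped, and the status command exits with code 2 (tolerated by the test); after unpause the same probes come back clean. A manual sketch against the same profile, assuming it is still running:

    out/minikube-darwin-amd64 pause -p embed-certs-309000
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-309000   # "Paused", exit status 2
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-309000     # "Stopped", exit status 2
    out/minikube-darwin-amd64 unpause -p embed-certs-309000
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-309000   # expected to report healthy again, exit status 0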

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-763000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-763000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (46.94977667s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-763000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ab55872e-73fd-4fec-aa3c-60995b67d8a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ab55872e-73fd-4fec-aa3c-60995b67d8a1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.014438527s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-763000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-763000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-763000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-763000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-763000 --alsologtostderr -v=3: (11.029344215s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000: exit status 7 (104.75078ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-763000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (560.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-763000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-763000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (9m19.935977313s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (560.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rg4s2" [74310229-ed87-47ff-8163-e27572e74fcb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013267167s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rg4s2" [74310229-ed87-47ff-8163-e27572e74fcb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008227657s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-763000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-763000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-763000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000: exit status 2 (409.385577ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000: exit status 2 (409.962132ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-763000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-763000 -n default-k8s-diff-port-763000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.53s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-277000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-277000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (41.534101336s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.53s)
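
The newest-cni group starts with CNI networking enabled but no network plugin actually deployed, which is presumably why later subtests warn that pods cannot schedule. The recorded start command passes the pod network CIDR through to kubeadm and only waits for the apiserver, system pods and default service account; all flags below are verbatim from this run:

    out/minikube-darwin-amd64 start -p newest-cni-277000 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --kubernetes-version=v1.26.1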

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-277000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0223 18:08:19.707930   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/default-k8s-diff-port-763000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-277000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.099981423s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-277000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-277000 --alsologtostderr -v=3: (11.098638045s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-277000 -n newest-cni-277000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-277000 -n newest-cni-277000: exit status 7 (104.104494ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-277000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (24.71s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-277000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
E0223 18:08:55.867999   24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-277000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (24.291817723s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-277000 -n newest-cni-277000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-277000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-277000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-277000 -n newest-cni-277000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-277000 -n newest-cni-277000: exit status 2 (409.872539ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-277000 -n newest-cni-277000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-277000 -n newest-cni-277000: exit status 2 (406.070365ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-277000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-277000 -n newest-cni-277000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-277000 -n newest-cni-277000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (15.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 10.135801ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fvwlt" [3db04afd-ea9f-41fe-8a64-87507f5f3cc1] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008976805s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2nnwn" [411c3e45-cf28-4ff5-9243-f2224b5e3ce5] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008568939s
addons_test.go:305: (dbg) Run:  kubectl --context addons-106000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-106000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-106000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.79165299s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.90s)
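
The Registry test verifies the registry and registry-proxy pods, then checks in-cluster connectivity to the registry Service by running a throwaway busybox pod; the remaining host-side check is skipped on this driver because of its connectivity assumptions. The in-cluster probe, as recorded:

    kubectl --context addons-106000 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"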

                                                
                                    
TestAddons/parallel/Ingress (11.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-106000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-106000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-106000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7f251202-7bdc-400f-82af-fb7a00d77fa9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7f251202-7bdc-400f-82af-fb7a00d77fa9] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009257242s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-106000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.28s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-523000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-523000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-ptpx8" [9000e0fe-746f-4156-b7b3-4f0e0552aba4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-ptpx8" [9000e0fe-746f-4156-b7b3-4f0e0552aba4] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007053342s
functional_test.go:1614: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)
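
ServiceCmdConnect deploys an echoserver, exposes it as a NodePort service on port 8080 and would then exercise the service endpoint; that last step is skipped for port-forwarded drivers (see the linked issue). The setup half, as recorded:

    kubectl --context functional-523000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-523000 expose deployment hello-node-connect --type=NodePort --port=8080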

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-152000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-152000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-152000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: ip r s:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: iptables-save:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: iptables table nat:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-152000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-152000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-152000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-152000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-152000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-152000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-152000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-152000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-152000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-152000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-152000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: kubelet daemon config:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> k8s: kubelet logs:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-152000

>>> host: docker daemon status:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: docker daemon config:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: docker system info:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: cri-docker daemon status:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: cri-docker daemon config:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: cri-dockerd version:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: containerd daemon status:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: containerd daemon config:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: containerd config dump:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: crio daemon status:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: crio daemon config:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: /etc/crio:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

>>> host: crio config:
* Profile "cilium-152000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-152000"

----------------------- debugLogs end: cilium-152000 [took: 5.497869258s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-152000
--- SKIP: TestNetworkPlugins/group/cilium (6.02s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-718000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-718000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)